The past month at IFOW has seen us publish two major reports – the Good Work Monitor 2025 and our Motivating Futures report in partnership with the EY Foundation.
We have also been pushing forward case studies with firms for our work with the CIPD, exploring how HR, organisation strategy, and people development practices can lead to successful AI implementation at work.
It's also been wonderful to be doing on-the-ground field work in Lincolnshire, talking to young people about access to green jobs for our Flourish project, funded by Ufi VocTech Trust. We have a new blog post about that action-research here.
We hope that you have been enjoying the summer so far, and wish all who are able to take a break a refreshing holiday. September will see party conference season come round again, and we’ll be sharing news of our presence at conferences very soon.
Anna and IFOW Team
At the end of last year, we published the ground-breaking Data on our Minds report on ‘affective computing’ – systems that look to track behaviour and activities in the workplace. We argued there for new, freestanding rights to explicitly protect against ‘neurosurveillance’ and the use of these systems to manipulate, interfere with, or commodify workers’ cognitive, emotional and behavioural functions and capabilities.
In a new blog post for us, IFOW Research Fellow and legal scholar Emine Akar explores the specific issue of ‘Emotional AI’. These are systems that purport to monitor, predict, and respond to human emotion. They are marketed as tools that bring empathy and personalisation to human–machine interactions and, by claiming to detect how we feel, can adjust their outputs accordingly.
But emotions are complex, contextual, and culturally shaped. Smiles and tears do not map simply onto specific emotional states. Moreover, if you are aware that your emotions are being tracked, the natural response is to adjust. As Akar writes:
Once emotional expressions become a form of data to be measured and ranked, people may begin to perform emotions in strategic ways. Employees may feign positivity. Students may mask confusion. Customers may smile for discounts.
This adds weight to calls for new forms of protection – as Akar calls it, a framework for ‘emotional privacy’ – the freedom to express emotion without it being recorded or categorised, and the ability to inhabit emotional spaces — such as grief, confusion, or excitement — without those states being instrumentalised. Read the full blog post here.
Dynamic pricing bubbled up into major public frustration with the sale of Oasis tickets. In this article for Trust for London, our Co-Director, Dr Abby Gilbert, and Head of Partnerships, Jo Marriott, explore how dynamic pricing extends beyond your ability to purchase coveted concert tickets to the risk of undermining workers’ rights.
Comment from our Associate Director, Oliver Nash, features heavily in this article in the Financial Times, examining the potential role of AI in the sharp decline in graduate job listings. Closing the piece, Nash advises that ‘we should be wary of the dramatic headlines about AI’s impact on hiring’, pushing instead for wider consideration of other factors, including national economic uncertainty, a reduction in hires post-pandemic and increased offshoring.
This hard-hitting piece from Focus on Labour Exploitation (FLEX) examines the high-risk working reality of ‘delivery-only kitchens’, also known as ‘dark’ or ‘ghost’ kitchens, where chefs face intense pressure to prepare food for apps and restaurants. Often located in remote spaces, dark kitchens further exemplify the growing ‘gig economy’, where algorithmic management systems dictate working environments for platform workers.
These findings resonate strongly with our report on The Amazonian Era: The Gigification of Work, which found growing levels of physical and mental exhaustion amongst workers, as well as increasing isolation and a diminished sense that work is fulfilling.
A new report by the National Institute for Social Research finds strong correlations between political orientations and the perceived benefits and impacts of AI use. It investigates divergent political views on AI, channelled along left/right and libertarian/authoritarian leanings.
IFOW’s knowledge hub on politics and perceptions of automation risk further explores how the political views of workers are shaped by the anticipation of risks and threats posed to jobs by AI.
What does the Industrial Strategy mean for Metro Mayors’ local growth plans?
Following the publication of the Industrial Strategy, we published our annual Good Work Monitor, mapping productivity and six dimensions of good work across all 203 local authorities in England, Scotland and Wales.
Connected to that, this blog post by the Centre for Cities considers how the IS-8 (the eight strategically important sectors set out in the Strategy around which to build growth) are proving important to metro mayors, examining regional growth opportunities in the context of city centres.
The Productivity Institute’s flagship conference will take place at The University of Manchester in September, exploring productivity in relation to digital transformation, innovation, sustainability and governance. We were delighted to have TPI host one of the satellite venues for the closing conference of our Pissarides Review in January, and also have an expert blog piece by TPI Governing Council member, Tera Allas CBE, to accompany our Good Work Monitor launch in July.
Hosted by the International Inequalities Institute at LSE, Professor Thomas Piketty will deliver a special lecture on recent trends in global inequality, analysing the historical movement towards economic equality and redistribution.
The FT’s Future of AI Summit 2025 returns in November with a focus on the rapidly advancing field of AI innovation, bringing together global experts on AI governance, regulation, machine learning, generative AI applications, predictive and prescriptive AI, deep learning and more.