July 28, 2023

Making Algorithmic Management safe for workers: new regulation is needed

In the ever-evolving world of technology, Algorithmic Management (AM) is emerging as a transformative concept in the workplace. But what makes these systems different, what are the new risks to workers, and what should be done?

AM systems using artificial intelligence (AI) components differ from the typical products and services regulated by product safety law. Because they are generative and emergent, their capabilities and competencies evolve in ways that cannot be fully anticipated, creating unpredictable risks. This is particularly pertinent in the new world of large language models (LLMs). The emerging risks that new technologies pose, however, are not experienced identically across different types of data subjects, e.g., consumers, citizens, or workers. We should therefore consider more carefully what those risks are, and what they look like in terms of employment, the employment relationship, and working conditions.

As I point out in my book The Quantified Self in Precarity, AM is used to measure, monitor, and track workers’ activities through semi-automated management; yet physical, social, and mental wellbeing is often not explicitly linked to the work stress that AM can, in fact, cause. All forms of automation can generate anxiety around the fear of potential job loss or, for those who retain jobs, work intensification. Because psychosocial stress takes longer to manifest itself physically, it is difficult to assess and treat in the medium term. Now, with new LLM technologies entering the market, we should consider even more urgently how AI-augmented management automation can introduce further stress dynamics through the seeming unpredictability of its machine learning capacities.

For these reasons, as AM becomes more widespread, identifying and regulating its use to protect employees from psychosocial risk is becoming an urgent priority. Towards this goal (and based on the paper I co-authored for the European Labour Law Journal), I discuss below some challenges posed by AM for occupational safety and health (OSH), and then consider what legislation to tackle these challenges should do.

Watch Professor Moore’s contribution to the APPG event on regulating AI.

Psychosocial Occupational Safety and Health (OSH) risks posed by AM

1. Psychological Contract and Trust

When we take a job, the ‘psychological contract’ goes beyond formal employment agreements and represents the unspoken mutual expectations between employers and employees. AM can disrupt this contract, eroding trust between workers and their employers, leading to stress and anxiety.

2. Discrimination, Bias, and Unfairness

While AM systems may seem objective, they can perpetuate biases if trained on discriminatory data. This may result in unfair outcomes during hiring or decision-making processes, impacting workers' mental well-being.

3. Deskilling and Moral Deskilling

AM’s automation of tasks can lead to deskilling, where demand for workers’ acquired skills falls and they are not required to learn new ones. Moral deskilling refers to the erosion of ethical judgement and regard for social protections as digitalisation is integrated into work processes. In sectors like healthcare and education, where the work itself has a moral and ethical dimension, this loss of skills can have broader societal implications.

4. Worker Autonomy

AM can undermine workers' decision autonomy and value autonomy, limiting their ability to influence their work and preferences. This lack of control may lead to increased stress and reduced job satisfaction.

5. Privacy and Function Creep

Continuous surveillance and monitoring by algorithmic systems can make workers feel spied upon, jeopardising their privacy. Ensuring that an AI system has a limited, well-described, and well-understood purpose, and preventing ‘function creep’, are essential safeguards when deploying AM.

6. Discipline

The opaque nature of AM decision-making can be used for abrupt and arbitrary termination and platform deactivation, particularly affecting gig workers on precarious employment contracts.

7. Work Intensification

Centralised algorithmic control in AM may lead to work intensification: as workers are pressured to meet algorithm-determined targets, work pace can accelerate and safety and health standards can slip. The resulting time pressure and stress negatively impact workers’ mental health.

Regulating Algorithmic Management for Worker Protection

To address the psychosocial risks posed by AM, a new law on Algorithmic Management with a focus on occupational safety and health is needed. Current regulations such as the EU’s Occupational Safety and Health Framework Directive and the Platform Work Directive address some of these issues, but their coverage is not comprehensive. In enhancing these with new United Kingdom regulation, we should consider:

1. Design for Responsibility

Instead of relying solely on ‘safety by design’ principles, we need to move towards ‘design for responsibility’. The former is insufficient given the process uncertainty inherent in AI-based AM technologies. The latter requires shared responsibility for safety, involving users and stakeholders throughout the implementation and ongoing use of AM systems.

2. Risk Assessments and Communication

New legislation should also incorporate regular monitoring and evaluation of an AM system’s impact on workers’ wellbeing. Employers should provide education and training about all new systems; conduct systematic, mandatory risk assessments whenever introducing or modifying AM systems; and carry out continuous follow-up checks. Workers should be given meaningful options to opt out of any technology used at work.

3. Consultation and Worker Involvement

A new regulation should emphasise consultation with workers and their representatives during the planning and introduction of AM systems (see, for example, IFOW’s Good Work Algorithmic Impact Assessment tool). Employers should always consult with and come to agreements with worker representatives when considering introducing a new system. This engagement will allow workers to express concerns and ensure their safety and health interests are actively listened to.

4. Reporting Mechanisms

Internal and external reporting mechanisms should be established, enabling workers to report potential health and safety risks associated with AM. Employers should not be using AM systems in ways that jeopardise workers' physical or mental health.

Conclusion

As AM continues to reshape workplaces, its impact on workers’ mental and social wellbeing must not be overlooked. The psychosocial risks arising from AM demand comprehensive regulatory measures to safeguard employees’ health and ensure responsible use of technology. By integrating these provisions into a new law (one that recognises the Digital Information Principles at Work, which IFOW developed with the Shadow DCMS team as an amendment to the DPDI Bill), we can achieve a healthier and safer work environment in the era of Algorithmic Management. Through collective effort and responsible design, we can embrace the potential of AM while nurturing the wellbeing of workers, allowing, as IFOW have it, ‘innovation and social good to advance together’.

Author

Professor Phoebe V Moore

