September 13, 2023

Regulating Algorithmic Management

When people discover that I work on AI and the future of work, the most frequent response is one of concern: "Am I going to lose my job?"

This anxiety about technology replacing us has recurred throughout history, but it is not our primary concern right now: instead, my team has been focusing on the replacement of management jobs by algorithmic systems. At first, because ‘manager’ is not necessarily a universal term of endearment, people might think that is no great loss. But management by algorithm raises serious social, legal, and regulatory issues.

What we have seen — starting in the gig economy in the last decade, but now spreading to jobs across the socio-economic spectrum — is the automation of the full range of traditional management functions, from hiring workers, to managing them on a day-to-day basis, to firing them. This is happening either through the complete substitution of human management or through the augmentation of how these functions are exercised.

There are potential positives here — in health and safety contexts, for example — but there are also well-documented problems. These range from bias and discrimination to systems simply not delivering on their claims: be very wary when an algorithm promises to provide exact measures of each individual’s team-working capabilities from a two-minute video of a group of job candidates.

Some might retort that humans aren't perfect either, and that there are plenty of problems with traditional management. This is true, so the key legal question boils down to the novel challenges: the new regulatory gaps that we will need to address. In Regulating Algorithmic Management: A Blueprint, we identify three primary areas of concern: privacy harms, information asymmetries, and the demise of managerial and human agency.

Privacy harms

Employers have always collected information about their workforce, from CVs and annual appraisals through to all sorts of other personal information. But, in the hands of AI, a system can use information about individuals in a workplace to make deductions and inferences about others – leading to what is often referred to as ‘relational privacy harms’.

In this context, privacy is no longer just an issue for one individual worker but for others who may be involved in that work setting in the future. This might mean identifying future workers who might be flagged as likely to get involved in trade union activity, or identifying when somebody might want to exercise other legal rights and sacking them in advance. Software of this type already exists on the market.

Information asymmetries

The second challenge concerns information asymmetries driven by the granularity and constancy of workplace data collection. This creates a power imbalance in the workplace, as employers can extract all kinds of knowledge about a worker — “moving the brain up the chain”, as one of my favourite articles puts it. The worker (or their representative) is often unable to access the same data — and may not have the means to process it in a meaningful way if they do — making it difficult to understand whether discrimination or bias has occurred.

The demise of human agency

The agency that managers have traditionally had to run a workplace is increasingly being diminished by the implementation of algorithmic systems.

To give an example, some years ago a group of workers at a major online retailer claimed that they had been sacked for trying to form a trade union. The company’s defence was that the local managers who operated the plant neither understood nor controlled the system that fired the workers. The lack of human agency cited in that defence — the people managing a warehouse able neither to understand nor to control the systems that sack people — is an extraordinary place to be.

Every employment contract contains an 'implied term of mutual trust and confidence', recognised by the House of Lords and subsequently the Supreme Court. The law asks businesses to exercise management responsibilities; the moment that these become dissipated into the cloud, the regulatory challenges really become rather serious.

Three areas for action

What should our response be? In the Blueprint, we set out three kinds of regulatory approaches.

First, we call for some red lines: a limited list of practices that should be banned outright. This would include using predictions of when somebody is likely to exercise a legal right as grounds for terminating their employment.

Second, we call for full information rights, both for the individual worker and for their representatives. If companies simply flood people with sheaves of technical information and data, they will be overwhelmed by it. By having collective avenues of information, expertise can be built up and the information asymmetry ameliorated.

Finally, we call for meaningful avenues of contestation. This does not mean that humans should be ‘in the loop’ at each point, as forcing this may well be counterproductive. Instead, we need humans before, above, and after the loop — creating clear points of conversation before systems are implemented, as their impacts are assessed, and after any decisions are taken. The use of Algorithmic Impact Assessments is dealt with in our recent special issue.

However, it’s important to understand that calling for new regulation in certain areas does not necessarily imply that existing regulatory regimes aren’t working. The interesting question is working out what the genuinely novel regulatory challenges are. When it comes to discrimination protection, for example, it doesn't matter whether a human or an automated system discriminates — it is illegal under the Equality Act 2010, and may well constitute direct discrimination.

There are clear potential positives in algorithmic management. But to reap the benefits – and avoid the perils – of AI at work, we need a solid regulatory framework that ensures a level playing field.

Prof Jeremias Adams-Prassl gratefully acknowledges funding from the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 947806). This blog post draws on work with Aislinn Kelly-Lyth, Halefom Abraha, Six Silberman, and Sangh Rakshita.

This post was prompted by Prof Adams-Prassl's presentation at our APPG on the Future of Work session on workplace data rights in the age of AI.

Author

Professor Jeremias Adams-Prassl

