October 14, 2021

Should workers audit the algorithms which make decisions about them at work?

The Institute for the Future of Work is pleased to announce a new project with the ICO looking at worker involvement in the auditing of algorithmic systems. The project will be led by Abby Gilbert, the Institute's Head of Research.

Algorithmic systems are increasingly transforming our experience of work. As the Institute’s research has demonstrated, this can have detrimental consequences for equality and fairness, and lead to a range of social, material and psychological impacts on workers.

To check these impacts, and to safeguard against them, there is an emerging consensus that AI and automated decision-making tools should be audited. A range of statistical auditing tools is now available, promising to detect and correct bias in algorithms and so make them ‘fair’. However, as we have demonstrated, such tools are often not compliant with UK Equality Law, are not transparent about the definition of fairness they adopt, and do not create space for workers to reveal the possible impacts of these systems.

This approach also fails to give workers a way to understand and influence how their data is processed. Auditing as it stands does not allow workers to make fully informed decisions about trade-offs in the design of algorithmic systems.

Policy discussions have long focussed on the need for ‘meaningful’ human review of automated decisions made by algorithmic systems where those decisions have significant consequences for the people they affect. As businesses look to excel in compliance and advance best practice, guidance is needed on what worker involvement could look like in practice.

The Institute for the Future of Work has identified a gap: guidance for businesses that want workers to be involved in auditing algorithmic systems.

This is particularly important, given that at present there is: 

  1. A lack of consultation: Algorithmic systems are often introduced without workforce consultation, and without managerial awareness of the legal and ethical implications of their design and deployment. 
  2. A lack of transparency: Many workers are unaware of what decisions are being made about them by automated systems and how these systems are being reviewed. Further, statistical auditing tools (which are not always used) lack transparency about definitions of fairness. 
  3. A lack of understanding: The language of statistics is often a barrier to informed dialogue and meaningful human review of automated decision-making systems, and it prevents data protection regulation and equality law from effectively holding firms to account for the impacts of automated decision-making. 

In this project, we build on the principles of meaningful human review to consider the involvement of workers in auditing processes.

We will identify a range of methods for workforce participation in human review and auditing processes, and create a toolkit to explain why these approaches are needed. Please get in touch with abby@ifow.org if you would like to discuss or be part of this work.

Author

IFOW

