
Introducing Equality Impact Assessments for Artificial Intelligence

‘IFOW’s research on artificial intelligence and hiring has informed our work on bias at the Centre for Data Ethics and Innovation. IFOW have an ambitious agenda to advance the cause of equality and AI – and the results are beginning to show.’

Adrian Weller - Director for Artificial Intelligence, Turing Institute and Board Member, Centre for Data Ethics and Innovation

Our research, case studies and expert Equality Task Force have demonstrated the need for pre-emptive action in the design and deployment of data-driven tools at work. But there is widespread misunderstanding of the risks that data-driven systems pose to equality, and a lack of confidence among business owners about how to act responsibly to avoid them. To ensure these technologies do not compound existing inequalities, firms need tools to help them evaluate different types of adverse impact - especially equality impacts - and choose between the different adjustments possible.

To promote legal compliance, better governance and best practice, we have designed a dedicated AI Equality Impact Assessment (‘EIA’), which can be undertaken as part of an algorithmic or data protection impact assessment. IFOW has published a first prototype and undertaken a public consultation to improve it.

Our idea of an AI EIA has caught on fast. We are delighted to be working with the Equality and Human Rights Commission, the Centre for Data Ethics and Innovation and the Turing and Ada Institutes in this area. All of them are recommending that greater attention be given to rigorous, ongoing evaluation of AI equality impacts. In November 2020, the CDEI's Bias Review recommended that regulators start developing compliance and enforcement tools, including algorithmic audits and equality impact assessments.

IFOW is proud to have spearheaded the debate on AI and equality impacts, and we look forward to developing and piloting our AI Equality Impact Assessment. Promoting equality is more than an ethical choice: it is core to shaping a future of better work.

Our research was cited in a parliamentary Early Day Motion, ‘Equality in Recruitment and Employment’, a Parliamentary Library brief on the Future of Work, and a Westminster Hall debate on the Future of Work.

This House notes the increase in use of artificial intelligence and automated-decision-making to recruit, manage and pay workers remotely through the coronavirus pandemic; recognises that learning from past patterns of behaviour to inform decisions about the future increases the risk of deepening inequality for people and groups who are already experiencing disadvantage; further notes that fair access to, and terms of, work is critical at a time when 15 per cent of the workforce have already lost their jobs and over 2 million people are claiming universal credit; expresses concern that the auditing tools most commonly used to assess the risk of discrimination are importing legal standards from the United States which do not reflect the purpose or requirements of the Equality Act 2010; recognises new research from the Institute for the Future of Work which reveals significant limitations in existing approaches to assessing the impacts of automated systems on equality; and calls on the Government to develop and enforce high industry standards to promote good governance and compliance with the Equality Act, to require regular Equality Impact Assessments as proposed by the Institute for the Future of Work and to review the application and enforcement of the Equality Act to ensure it is fit for purpose.

Early Day Motion, 27 April 2020

