May 20, 2022

Regulating the robots: NYC mandates ‘bias audits’ for AI-driven employment decisions

Recruitment tools driven by artificial intelligence (AI) algorithms – including game- or image-based assessments and algorithmically analysed video interviews – are becoming more mainstream, with uptake accelerated by the pandemic. The growing adoption of these tools has led to concerns about how they can be applied ethically and without discrimination.

In recent years there have been a number of high-profile cases of bias in algorithmic tools, including Amazon’s scrapped recruitment tool, which was biased against female applicants whose CVs contained the word ‘women’s’. There have also been reported disparities in the accuracy of voice-recognition technologies, which make the transcription of video interviews problematic, with some subgroups more prone to incorrect or missing transcriptions.

In response, a number of frameworks and auditing tools have been developed to support the identification and mitigation of bias in AI-driven psychometric and recruitment tools. Auditing in this instance is defined as the research and practice of assessing, mitigating and assuring an algorithm’s legality, ethics and safety. Auditing falls within the wider field of AI ethics, where audits can act as a mechanism towards greater governance of the use of algorithms, something that is vital for high-risk and ethically critical tools, such as those used in recruitment.

In the UK, the auditing of AI-driven recruitment systems is endorsed by the Recruitment and Employment Confederation (REC) and the Centre for Data Ethics and Innovation (CDEI), who recommend that recruiters using these tools should seek audit documentation from vendors before deploying them.

New York City Council, in the USA, has taken this one step further by mandating that all AI-driven recruitment tools within the city limits be audited for bias from 1 January 2023.

This legislation is a step in the right direction towards ensuring that AI-driven recruitment tools are used in a safe, ethical and legal way. It is also a signal of what could come across the whole of the USA, as well as in the EU, UK and indeed the rest of the world.

While a step in the right direction, there are some areas that need further clarification and some questions that remain unanswered, which I explore below.

Who is qualified to conduct audits?

While much of the NYC legislation is concerned with defining key terms, such as what a bias audit is, there remains a lack of actionable definitions. For example, the legislation stipulates that bias audits should be in the form of an ‘impartial evaluation’ by an ‘independent auditor’, but does not require these independent auditors to be accredited. Depending on the audit approach, evaluation could require a computer scientist to examine the code, a psychologist who is accustomed to developing recruitment tools and testing for bias, and/or a company with experience conducting audits. However, some have proposed an approach similar to financial auditing, where auditors are required to have appropriate certification, in an ‘auditing the auditors’ type of approach.

Is there a recommended approach to conducting the audits?

We have already seen some steps towards the auditing of AI-driven recruitment tools, with pymetrics and HireVue – two companies who are using these tools – making audits of their systems public. However, the two vendors took two different approaches: pymetrics’ audit focused on the code used to create the algorithms, while the HireVue audit was based on discussions with relevant stakeholders about the bias identification and mitigation process. Existing auditing tools can be inadequate for ensuring compliance with UK equality law, and there are multiple approaches to conducting an audit which may assess different aspects of a system – including governance audits to ensure that policies are followed, empirical audits to examine the inputs and outputs, and technical audits to examine the data, code and method used to create the algorithm. A recommended approach to carrying out the audits would therefore be beneficial if similar legislation were introduced in the UK.

What are the appropriate metrics to determine bias?

There are multiple metrics that can be used to determine whether an algorithm is biased as part of a technical audit, including the widely used four-fifths rule, two standard deviations, and chi-squared tests to assess the independence of scores. Many of these metrics originate in the USA and do not hold the same weight in the UK. Different metrics can produce discrepant results, and none guarantees compliance with UK equality law. Further guidance on suitable metrics, and on processes to manage inevitable trade-offs, is therefore needed.
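To make the four-fifths rule concrete, the following is a minimal sketch of how an adverse impact check might be computed, assuming selection outcomes are recorded per candidate as (group, selected) pairs. The function names and the example data are hypothetical, for illustration only; a real audit would use the vendor’s actual outcome data and, as noted above, would not rely on this metric alone.

```python
def selection_rates(outcomes):
    """Compute the selection rate for each group from (group, selected) pairs."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(outcomes):
    """Return (impact_ratio, passes): the lowest group selection rate divided
    by the highest, and whether it meets the four-fifths (0.8) threshold."""
    rates = selection_rates(outcomes)
    impact_ratio = min(rates.values()) / max(rates.values())
    return impact_ratio, impact_ratio >= 0.8

# Hypothetical data: group A is selected at 50%, group B at 35%,
# giving an impact ratio of 0.7, which falls below the 0.8 threshold.
data = ([("A", True)] * 50 + [("A", False)] * 50 +
        [("B", True)] * 35 + [("B", False)] * 65)
ratio, passes = four_fifths_check(data)
```

A chi-squared test on the same data could reach a different conclusion, which is exactly the discrepancy between metrics described above.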

Who is the audience of the audit reports?

Audit reports, based on technical or statistical analysis of algorithms, can be dense documents detailing complex methods, particularly for individuals who are not familiar with them. While in the UK the REC/CDEI emphasise the usefulness of audit results for recruiters seeking to use these tools, the NYC legislation leans towards the audit reports being accessible to candidates who will be judged by these tools. Both parties could benefit from audit reports being made public, but for this to be effective they must be translated into a user-friendly format. The NYC statute does not stipulate this, meaning that it could be acceptable just to provide the output of analyses without giving any additional context. It would be beneficial for future legislation to outline what a desirable audit report should look like, so that the information is meaningful to the target audience, whether this be candidates, recruiters or both.

What is the level of compliance needed?

The NYC legislation states that auditing of AI-driven recruitment systems should include, but not be limited to, disparate impacts (differences between demographic groups). We have previously suggested other factors these audits could examine, including transparency (in the governance and decision-making procedures, and system explainability), safety or robustness (accuracy when applied in different contexts or to different datasets) and privacy (whether the system processes data in ways that could reveal the nature of the data the model was trained on, or the data it uses when producing an output).

The way the legislation is worded could lead to differences in levels of compliance, with some firms doing just the minimum to comply with the law while others go above and beyond. A more comprehensive approach to auditing, and guidance on when it is acceptable or appropriate to comply only at the minimal level, is therefore required.

Despite these limitations, the passing of this legislation is a positive sign and may have a ripple effect – particularly across the rest of the USA, where the vast majority of algorithmic recruitment tools are being developed and deployed – resulting in firms using the same or similar algorithms outside of New York City also opting to conduct audits. This legislation takes us one step closer to greater transparency, increases the trustworthiness of algorithmic recruitment systems, and helps ensure that any potential harm resulting from them is minimised. Similar legislation in the UK is needed, but before roll-out it must address some of these unanswered questions to facilitate maximal compliance.

You can read more about the NYC regulation, and its shortcomings, in this article.

Airlie Hilliard is a doctoral researcher at Goldsmiths, University of London where she researches bias and fairness in the context of algorithmic recruitment tools from the perspective of psychology and machine learning. She is also a senior researcher at Holistic AI, a startup providing algorithmic assurance and building trust around the use of AI. Twitter: @AirlieHilliard

