January 4, 2021

Timnit Gebru, Google, and institutional discrimination in AI: lessons for 2021

Dr. Timnit Gebru, a prominent Black AI researcher and former head of the Ethical AI team at Google, was unceremoniously fired on 3 December 2020. Policy-makers, ethicists and technologists across the globe have since expressed concern about the profound implications for technology workers and for AI ethics more widely.

Forty-eight hours before she was fired, Gebru asked for legal advice on whistleblowing protection concerning ‘intimidation and censorship’. In the USA, legal protection for whistleblowing in these circumstances is known to be limited. But are things better in the UK?

Timnit Gebru is a pioneer. Her work on AI ethics has examined numerous ways in which technology, training data and targeting can, and do, structurally discriminate against women and ethnic minority communities. Her research on structural bias in large language models struck a blow at the heart of Google’s business model. Her research also has significant implications for access to work, and for conditions of work, more widely.

Gebru was dismissed on 3 December for an email she sent which was said by Google to be ‘inconsistent with the expectations of a Google manager’. Gebru sent the email in response to Google’s order that she retract her co-authored research paper. For a former academic, an order to retract a paper would have been devastating: retraction calls into question the validity of the research and is associated with misconduct, such as falsification or plagiarism.

Gebru also openly criticised Google’s lack of incentives and poor progress in hiring women (only 14%) and encouraged recipients to focus on “leadership accountability and thinking through what types of pressures can also be applied from the outside”. Her email is set out here.

Gebru fought back. Believing she was being managed by somebody unfamiliar with the academic peer-review process, she wanted to know more about the feedback received on her paper and the basis of the retraction order issued by Jeff Dean, Google’s Head of AI. Gebru alleges that Google was unprepared to engage in any such discussion about her research and that she was not given an opportunity to respond. There is some dispute about how even-handed Google’s approach to the review process was.

Sundar Pichai, CEO of Google, has since pledged to investigate the matter.

Gebru tweeted her dissatisfaction with the update, commenting on Pichai’s “strategic” language, pointing out that no apology had been made, and stating:

Don't paint me as an "angry Black woman" for whom you need "de-escalation strategies"

In a recent interview with Technology Review, Gebru elaborated:

I was definitely the first Black woman to be a research scientist at Google. After me, we got two more Black women. That’s, like, out of so many research scientists. Hundreds and hundreds […] It was just constant fighting.

Importantly, Gebru has expressly noted elements of an anti-whistleblowing culture within the tech giant:

If other leaders are tone-policing you, and you’re too loud, you’re like a troublemaker – we all know that’s what happens to people like me – then if someone defends you, they’re obviously going to also be a problem for the other leaders.

So what protection would UK law have given Gebru? Taken as a whole, her allegations and exchanges on social media point to grounds for claiming direct and indirect race discrimination under the Equality Act 2010. Liability for such claims is hard to establish, with decision-making roles and responsibilities often obscured and access to relevant material inadequate, as IFOW has emphasised.

Gebru could also have argued that the dismissal, and Google’s public pronouncements about her employment, were unfair and amounted to detriments in breach of s.27 Equality Act 2010 (victimisation). But to succeed with such a claim, the claimant must have made an allegation specifically about a breach or contravention of the Equality Act. So an employment tribunal would have to determine whether the complaints made about unethical conduct or hiring practices met this very specifically defined threshold. General complaints about lack of diversity or ethical conduct are unlikely to be enough.

Would UK whistleblowing provisions come to Gebru’s rescue? Grievances are not enough to establish protection, which requires a disclosure of information tending to show that the law has actually been breached (or is likely to be breached in future) or that wrongdoing has been covered up. Again, this would be hard to establish, although if an Accountability for Algorithms Act (‘AAA’) were in force, Google would be subject to a ‘private sector’ equality duty. So, if Gebru had alleged a likely breach of the AAA, she would have been protected. But, as things stand, Google is not subject to positive duties, as a public authority might be, and so there is no breach on the table.

The proposal for an AAA in the public interest is the long-term solution, but three emergency or ‘quick fix’ solutions should be considered for 2021 in the interim, given the importance to society of whistleblowing like Gebru’s, and the huge potential reach of AI at work and beyond. The context is common knowledge: the AI field has been driven by an elite, white, male workforce. IFOW’s research reinforces Gebru’s own: AI systems concentrate power and divisions in society, and equality must be actively and systematically promoted rather than left to voluntary diversity programmes.

First, the UK’s whistleblowing protection should be updated. The wording of ‘protected act’ in s.27 should be widened to include something like: “making an express allegation that A or another person is not taking reasonable steps to promote, or is taking steps that undermine, equal opportunities.”

S.43B ERA 1996 could also be updated to include disclosure of information tending to show a failure to manage an organisation or practice in accordance with equal opportunities policies. The Accountability for Algorithms Act could provide these express whistleblowing protections too.

Second, independent algorithm audits, including assessment of equality impacts with a view to making adjustments where necessary, should be undertaken on a regular basis. This move to pre-emptive action would promote best practice and inform the growing debate on regulation of digital services and algorithms. It would also help prioritise equality and change workplace culture.

Third, regulators should be supported to update codes of practice, develop enforcement tools and initiate test cases as soon as possible this year, as the CDEI, IFOW and others have argued. These could include whistleblowing, discrimination and victimisation claims.

These changes are not a silver bullet for all the problems faced by Gebru, and others like her, but they are steps in the right direction. We think responsible technology companies should implement them right away through voluntary amendments to their equality and whistleblowing policies. We also need to expedite the debate on improving whistleblowing protection for AI technologists as part of the growing national and international conversation about regulation of AI in the public interest.

For the IFOW report on Accountability for Algorithms, AI and Equality, see here.

Authors

Dr. Anne-Marie Imafidon and Suzanne McKie QC

