July 25, 2023

AI advancements at work sound scary, but it’s humans behind the tech who threaten jobs

More than a century ago, when cars first hit UK roads, alarming numbers of fatal accidents brought on waves of “rules of the road” to keep humans safe. The decision to force everyone to drive on the same side of the road may have looked like a restriction but served as a guardrail that remains to this day. The new car innovation wasn’t causing the accidents – rather, humans were in the driving seat.

Two hundred and fifty years ago, new technologies of the Industrial Revolution brought a seismic economic and labour market shift. Factories and mechanisation transformed work – jobs were lost and new ones emerged. It took decades for governments to respond with adequate legal protections.

Alongside my role as a trustee of IFOW – an organisation that believes focusing on ensuring “good work” through appropriate workplace protections will balance innovation and the public collective good – I’m also chief executive of Stemettes, which works to engage, inform and connect five- to 25-year-old young women and non-binary folk with technical fields. From both vantage points, I can see the driving role humans have in Artificial Intelligence’s potential to disrupt the future of work.

Headlines and open letters herald human extinction brought about by AI, adding to those about robots taking our jobs and marrying our grandchildren. While so much might be technically possible given the resources and the right priorities, none of these eventualities is inevitable. Chemical weapons personalised to your DNA aren’t inevitable, but possible. Wall-E-style enfeeblement, where humans become dependent on AI, isn’t inevitable, but possible. What stands between us and the AI apocalypse are the decisions made by humans: technologists themselves, governments and society at large.

At work, decisions on AI and algorithms are shaping the work we do, how we work and when we work. The Future of Work might mean a disruption of our identity – perhaps we will no longer be defined by what we, or our parents, have worked as. It will definitely bring new jobs and sectors. It might even reduce how much work we need to do – anyone else fancy a four-day work week?

We’ve seen companies rushing to use AI across the workplace – from collecting data to making decisions about shift patterns and performance targets; from shortlisting candidates to running video interview rounds at scale. Three years ago, after I’d joined the IFOW Equality Task Force, our Mind the Gap paper identified how easily these new technologies reinforce gender, racial and socioeconomic inequalities – for example, recommending lower pay for women, or being less likely to recommend Black people be hired for particular roles. We also highlighted the gaps between the Equality Act, the Data Protection Act and GDPR in protecting workers from algorithms. There remains a need for better information about why, what and how AI and algorithmic systems are designed, developed and deployed at work.

The world of work our young Stemettes and their peers will step into in a decade or two – whether in the STEM industry or not – will feature the evolution of today’s technologies and increased exposure to the associated opportunities and risks. For them and the current generation of workers, we urgently need to lean into the opportunities for AI in the workplace – ensuring “good work” for all by promoting dignity, autonomy, equality, fair pay and fair work conditions. That way the inevitable evolution of jobs can happen in a less destructive way for folks across all sectors – from lawyers to writers, from retail to logistics.

Doing this will require a high degree of specificity in the guardrails, regulations and norms we set around the use of AI in the workplace. The set of fair and inclusive Digital Principles developed by IFOW with the Labour Party outlines how action and consideration need to come well before AI is even used – right from when it is being designed. Workers should have the right to access a fair, inclusive and trustworthy digital environment at work. These AI systems should not harm their mental or physical health. Workers should be involved and properly consulted in the selection and rollout of AI in their workplaces. They should have the opportunity for review and redress without fear of repercussions. And they must be protected from unsafe, unaccountable and ineffective AI systems in the workplace.

These principles would insure against the current state of affairs, in which most AI-related decisions are the preserve of the “technical elite” and those who run companies. Current discourse bemoans the narrow set of folks who are technically trained but lack ethics training – and who will end up inadvertently building AI weapons of mass destruction.

A more equitable approach to balancing innovation and the collective good would drastically reduce the size of the threat to humanity. The lack of gender balance in the industry and across technical decision-making alone is holding back the potential of these technologies to solve more problems than they create. That’s before we consider how the other protected characteristics, and more besides, are reflected across the workforce and wider society. Unchecked, we’re creating systems that widen pay gaps, multiply biases in recruitment and hiring, and weaken the decision-making of workers and their management. We’re also inadvertently creating increasingly exploitative business models across sectors – accelerating trends that existing regulators like the ICO and EHRC are trying to counter.

In a recent IFOW study, 52 per cent of service sector workers said they had no confidence about why, or for what purpose, their employer was collecting data about them. I’m somewhat heartened to see early work to establish modern rights for workers in this digital age across the EU, US, Australia, Japan and Canada. Here in the UK we have an opportunity to move early, move fast and set a standard that not only protects workers but also empowers society to solve some of our biggest and longest-standing issues. An intentional approach might allow us to finally make progress on some systemic failures by working with those who have been historically marginalised. Think of the good that could be done across our healthcare, education, travel and food systems when AI is applied correctly.

Thankfully – for now, at least – people are not anti-AI. Last month, when The Independent asked folks to describe their feelings toward AI, 46 per cent said they were “curious” and 42 per cent “interested”, versus 17 per cent who felt “scared”. That trust and optimism will be quickly lost if we do not create an environment that puts people first and allows them to build AI literacy.

Doing so will be good for business. AI on its own doesn’t create value for companies. Productivity returns are more likely to improve when technology investment is coupled with attention to how workers utilise these tools.

Fairness, inclusivity and safety; being consulted; human agency and the acquisition of new skills should form the foundation of the future of work. Other countries are investing to build this, as are the wisest of tech leaders.

AI offers a remarkable blend of opportunities and risks. It is a tool built by human hands with human decisions at each turn. The lack of human action on regulation at this critical juncture poses the greatest threat to workers across the country on a scale not seen since the Industrial Revolution. We must act now to build a digital economy that isn’t just pro-innovation but pro-people too. Which side of this road are we going to choose?

Anne-Marie Imafidon is the author of She’s in CTRL, and chief executive of Stemettes.

This article was first published by i-news.


