Society must take active steps to eradicate inequalities exposed by Covid-19. Anna Thomas, Director of the Institute for the Future of Work, argues we must also beware of entrenching inequality as we roll out new technology to help us through this crisis.
Widening health inequalities are being exposed through the pandemic. A report from Public Health England has confirmed what has been suspected for some time – that risk of death from Covid-19 is higher among people from Black, Asian and minority ethnic backgrounds. People living in more deprived parts of the UK are also disproportionately affected.
Speaking at the recent Future of Work Commission (reconvened by the Institute for the Future of Work), Professor Sir Michael Marmot, director of the Institute of Health Equity at UCL, described a “pressing need to examine social and economic determinants” of widening inequalities, especially the nature and quality of people’s work.
The killing of George Floyd has prompted people across the world to take to the streets in protest at systemic inequality: inequality that is woven tightly into the ways we live and work.
Inequality – the adverse treatment or disadvantage of people because of their race, sex, gender identity, or disability, or because they come from a deprived background – needs to be prevented. Systemic inequality is especially insidious: perpetuated by longstanding practices in society and subject to less scrutiny or challenge, it is difficult to spot. So we need to be vigilant, and we need to do better. This is true for crime, but it is true at work too, where the stakes are high: targeting inequalities at work should be a policy priority for the nation.
The use of data-driven technology at work is one area that we at the Institute for the Future of Work have identified as carrying specific risks.
The potential of new tech driven by data has been one of the success stories of the Covid-19 pandemic. These technologies are helping governments, industries, companies, and individuals adapt to the upheaval. This digital tech, which feeds off our data, has helped many of us to keep in touch with colleagues, family and friends. It is enabling new work and organisational models, tracking the spread of the coronavirus, curating entertainment from our sitting rooms, even promising to help ease the lockdown.
New technology is underpinning our hopes for economic recovery and growth too as, even at the height of the crisis, new jobs are created in ICT, research and development.
Artificial intelligence tools are being hastily developed and adopted as part of the technological arsenal governments, businesses, and individuals are using to tackle Covid-19: from systems that “track and trace” people who may have been in contact with Covid-19, to cameras that check body temperature, to smart-wear such as digital wristbands that monitor the physical distancing of people at work.
But new opportunities for individuals, companies and society come with new dangers. One important problem is that these systems can bake assumptions into their design that entrench inequality and discrimination – with the people responsible for them completely unaware of what is happening.
Research published by the Institute in April warns that companies must carefully and seriously examine the effect of new tech on the treatment of their people. Companies often use off-the-shelf auditing tools to check for bias, discrimination and other types of inequality. But we have found many just aren’t up to the job.
These automated (or semi-automated) decision-making systems are unseen and pervasive. Whether they do harm or good long into the future will be determined by decisions we make now.
Machine learning uses assumptions based on big data to predict the future. It will identify correlations between characteristics in historic data - data about the past - to inform decisions that shape the future. For instance, a machine learning model may push job adverts at specific audiences because of how people with similar characteristics have behaved in the past.
Machine learning’s great strength is that it can continually process new information almost instantaneously, discerning patterns that would otherwise elude us. Its weakness is that it can replicate past patterns of inequality on an unprecedented scale.
Machine learning systems are evaluated by how well they appear to predict future ‘success’. But because they are not human, they do not understand that their definition of success may be entrenching inequality.
So if the click behaviour of people online is stereotyped along gender lines – men, for instance, tending to click adverts for ‘NHS hospital manager’ and women those for ‘social care worker’ – then a machine learning model trained to estimate click probability will show men more adverts for managerial jobs and women more adverts for caring jobs.
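This feedback loop can be sketched in a few lines of code. The following is a hypothetical illustration, not any real ad platform’s system: the click log, group labels, and frequency-based “model” are all invented for the example. A model that simply maximises predicted click probability from stereotyped historic data will reproduce the stereotype in who sees which advert.

```python
# Hypothetical sketch: a naive click-probability model trained on
# stereotyped historic data reproduces that stereotype in ad delivery.
# All data below is invented for illustration.
from collections import Counter

# Historic click log: (user_group, advert) pairs reflecting past behaviour,
# stereotyped along gender lines as described in the text.
clicks = (
    [("men", "NHS hospital manager")] * 80
    + [("men", "social care worker")] * 20
    + [("women", "NHS hospital manager")] * 20
    + [("women", "social care worker")] * 80
)

counts = Counter(clicks)                          # clicks per (group, advert)
totals = Counter(group for group, _ in clicks)    # clicks per group

def predicted_click_probability(group, advert):
    """Estimate P(click | group, advert) from historic frequencies."""
    return counts[(group, advert)] / totals[group]

def choose_advert(group):
    """Serve whichever advert the model predicts the group is most likely to click."""
    adverts = ["NHS hospital manager", "social care worker"]
    return max(adverts, key=lambda ad: predicted_click_probability(group, ad))

print(choose_advert("men"))    # the managerial advert
print(choose_advert("women"))  # the caring advert
```

Nothing in the model is explicitly discriminatory – it only optimises its definition of ‘success’ (clicks) – yet the outcome is that past patterns of inequality decide who is shown which opportunity.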
There’s also a strong business case for employers to take a more proactive approach and undertake equality impact assessments. Research shows that promoting workplace equality would drive innovation by quadrupling the number of inventors. Leaders should be alert to this risk at a time when the response to the pandemic will define a company’s reputation for years.
But if we don’t take more positive action to counter the adverse impacts of AI systems at work, we risk falling foul of the UK’s Equality Act and undermining employee and public trust in technology too. So there’s a legal, moral and social, as well as business, imperative to proactively tackle different types of inequality now.
At the Institute for the Future of Work, we’re developing a tool to assess and mitigate the adverse impacts that an artificially intelligent ('AI') system deployed at work may have on equality and discrimination. The purpose of this Equality Impact Assessment ('EIA') is to promote legal compliance, good governance and best practice.
The EIA tools are aimed at helping employers to actively promote equality when automated systems determine access or terms of work, but they can also be used by others, for example engineers or platforms. We think this is one important part of the inequality challenges we face – and we’d like to hear your views in our 5 minute consultation.
AI and machine learning offer the chance to bring real efficiencies and benefits at a time of great need. But this must not happen at the expense of groups who are already experiencing great disadvantage and unequal treatment.
In our rush to reboot the economy, we must make sure that current inequalities are not unwittingly woven into the fabric of our employment systems. Failing to act now will undermine our hope and confidence in new technology when we most need it. Our thirst for tech solutions mustn’t see us sleepwalk into a world of work in which AI unwittingly entrenches prejudices or the status quo.