July 12, 2018

Exploring the use of AI, algorithms and automated systems in the workplace

Creating space to bring diverse voices together and opening up critical and informed dialogue is always important. But it’s especially important at this stage of building an institute. We’re delighted to have co-hosted a workshop with the Oxford Internet Institute last week to do just this: exploring the use of AI, algorithms and automated systems in the workplace, capturing different perspectives and collaborating to map out some of the most pressing issues. As this briefing note sets out, the workshop’s goal was to inform development of our ‘Promoting Equality Through Transition’ programme and identify immediate priorities.

The workshop came at a time of change on multiple levels. So, some of our takeaways may resonate with other institutions being established to navigate the impact of technology on society, including the Centre for Data Ethics and Innovation, which is currently open for consultation. The story of growth, managing transition fairly and mitigating societal side-effects is a story of two ‘i’s: ideas and institutions.

“We must remember that the context for this discussion is work and the everyday experience of people in and out of work through this period of transition.”

— Helen Mountfield

The first major challenge identified was the lack of an evidence synthesis on the various uses of AI, algorithms and automated systems at work. The nature of the game is that we often don’t know if or how employers or agents are applying algorithms or automated systems. So, we must seek a higher level of disclosure and bring together information from a range of sources and disciplines to inform specific issues. This will involve more than traditional collection and analysis. New types of social research and platforms for information-sharing should be combined with more academic approaches.

The workshop highlighted that traditional actors are missing either AI or labour law perspectives from their policy discussions. By contrast, academia tends to operate in discrete disciplines and may miss the sharp end of everyday experience. These different perspectives and sources should be connected to identify and fill evidence gaps. The OII’s important cross-disciplinary work on labour markets and the Royal Society’s evidence review will offer sound bases from which evidence syntheses on specific topics can be built.

“Algorithms have been around since 19th-century Prussia. What’s new is the scale of use combined with data collection. This makes use faster, more frequent, more pervasive and more individualised.”

— Vili Lehdonvirta

On a ‘macro’ level, we see an urgent need - like the AI Select Committee - to develop a more accurate understanding of, and forecasts for, the use of AI and the scope and pace of labour market change. Machine learning - as Michael Osborne has shown - can be leveraged to fill knowledge gaps and predict changes to the skill and task content of jobs over time. This should help in assessing the ‘good’ and ‘bad’ implications for equality, including the distribution of good-quality work, changes to the quality of work, and risk.
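
To make this concrete, here is a minimal sketch of the kind of approach Osborne’s work points to: training a classifier on expert-labelled examples to estimate how automatable a job is from its task content. The task attributes, training data and labels below are illustrative assumptions for the sketch, not real research data or Osborne’s actual model.

```python
# A minimal, illustrative sketch (not Osborne's actual model): estimating
# how automatable a job is from hand-labelled examples, in the spirit of
# Gaussian-process approaches to occupation-level automation forecasting.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Each row scores a job (0-1) on three assumed task attributes:
# [routine manual content, creativity required, social intelligence required]
X_train = np.array([
    [0.9, 0.1, 0.1],   # e.g. routine assembly work
    [0.8, 0.2, 0.2],   # e.g. data entry
    [0.2, 0.9, 0.7],   # e.g. design work
    [0.1, 0.8, 0.9],   # e.g. counselling
])
# Labels: 1 = experts judged the job highly automatable, 0 = not
y_train = np.array([1, 1, 0, 0])

model = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
model.fit(X_train, y_train)

# Estimate the automation probability for an unlabelled job profile
new_job = np.array([[0.5, 0.4, 0.6]])
prob = model.predict_proba(new_job)[0, 1]
print(f"Estimated probability of automatability: {prob:.2f}")
```

The appeal of this kind of model is that, once trained on a small set of expert judgements, it can score whole occupational classifications, which is what makes forecasting the scope and pace of change tractable.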

A second, related challenge is the need to explore the extent and application of existing legislative frameworks that bear on the use of AI at work: employment, equality, data protection, and health and safety laws in particular. Now we’re in post-GDPR territory, we’ll need an impact assessment of how the new data rights are working, including data subjects’ rights to an explanation of some uses of personal data by automated systems (mostly where decisions are solely automated). Participants had different views about the remit and accessibility of the new, beefed-up data rights, with experts in data protection taking a more optimistic line. We think that case studies, test litigation and opinions from the ICO, EHRC, Director of Labour Market Enforcement and European Data Protection Board will help inform debates and get them rolling. In the meantime, it’s important to work with our regulators, making sure they’re informed and resourced to act as necessary.

“Keep the innovation paradox in mind: are these new technologies and practices actually changing the underlying business model?”

— Jeremias Prassl

Moving forward, the group felt that we should look to extend the application of existing legal frameworks where we found evidence of specific gaps in protection. It’s easier, for example, to build on the model of the public sector equality duty - and ask private actors using AI to make decisions about individuals to consider the equality consequences of doing so - than it is to develop a new right to ‘fair treatment’. For now, employers should be encouraged to adopt generous interpretations of the GDPR and volunteer information about the use and purpose of algorithms; the characteristics of the input that can change decisions; and how the output is tested for bias. Kitemarks could be developed to promote responsible corporate conduct beyond legal imperatives. Good conduct should be good for business too.
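
By way of illustration, here is one minimal sketch of what ‘testing the output for bias’ could look like in practice: comparing an algorithm’s selection rates across groups and computing a disparate-impact ratio (the ‘four-fifths’ heuristic). The groups and decisions below are made-up assumptions, and this is one possible check among many, not a prescribed method.

```python
# An illustrative bias check on an algorithm's decisions: compare
# selection rates across groups using the "four-fifths" heuristic.
from collections import defaultdict

# (group, decision) pairs: 1 = selected by the algorithm, 0 = rejected.
# These records are invented for the sketch.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    selected[group] += outcome

rates = {g: selected[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)                          # selection rate per group
print(f"Impact ratio: {ratio:.2f}")   # below 0.8 flags potential adverse impact
```

A check like this is cheap to run and easy to publish, which is exactly the sort of voluntary disclosure a kitemark scheme could reward.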

“It’s a myth that there’s a conflict between the power, efficiency and interpretability of an algorithm. Seeking an explanation about how the characteristics of the input can be changed to change the decision is key.”

— Michael Osborne
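
The kind of explanation Osborne describes is often called a counterfactual explanation: finding how an input would have to change to change the decision. Below is a minimal sketch against a hypothetical linear scoring model; the weights, threshold and applicant profile are illustrative assumptions, not any real employer’s system.

```python
# An illustrative counterfactual explanation: nudge the most influential
# input feature of a hypothetical scoring model until the decision flips.
import numpy as np

weights = np.array([0.6, 0.3, 0.1])   # hypothetical model coefficients
threshold = 0.5                        # score >= threshold => positive decision

def decision(x):
    return float(weights @ x) >= threshold

applicant = np.array([0.4, 0.5, 0.2])
print("Current decision:", decision(applicant))   # False: rejected

# Increase the most heavily weighted feature in small steps and record
# the smallest change that alters the outcome.
feature = int(np.argmax(weights))
x = applicant.copy()
while not decision(x) and x[feature] < 1.0:
    x[feature] = round(x[feature] + 0.05, 2)

print(f"Decision flips if feature {feature} rises from "
      f"{applicant[feature]} to {x[feature]}")
```

Explanations of this shape are useful precisely because they are actionable: they tell the individual what would have made the difference, without requiring the full model to be disclosed.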

This will take time. So, given the pace of change and concerns about access to actionable information and remedy, a parallel debate about the merits of placing some positive responsibilities on employers (and agents and platforms) would add value. This should include a discussion of what we mean by ‘fair’ treatment at work.

A third challenge - or ‘bucket’ - is educating, skilling and re-skilling our workforce, with a focus on future needs. This challenge cuts across all questions and priority areas, raising its head at the most unlikely moments. By the end of our discussion, there was broad agreement that increased ‘AI education’ was needed to equip the workforce and the public to engage in a properly informed dialogue about the use of AI; to support its creative design and ethical use; and, as part of life-long learning, to ease workforce transition.

There was also widespread agreement that the ‘education bucket’ should go beyond support for AI-related and digital skills, and include education to prepare for the ethical and social challenges which arise. Challenges extend to how we prioritise, value and teach our most human ‘future-proof’ skills: creativity, social skills and critical thinking. These twin priorities should complement each other and shape a future in which human skills are augmented, work is made better and benefits are spread.

As our youth voice workshop on 5 July highlighted, AI and related technologies should be piloted to support future skills, enhancing individual and collaborative learning, re-skilling and job-matching. These challenges are set to become more acute.

A final challenge is to feed the discussion about guiding principles for policy orientation. Victoria Nash, Deputy Director of the OII, made closing remarks in which she welcomed the Commission’s foundational principles, which were informed by the work of Commissioner and public philosopher Michael Sandel. Work should provide dignity, autonomy and security, and good work should be available to all. IFOW should connect individual pilots with communities in transition to a national dialogue about our guiding principles and the solutions aimed at realising them. At a time when one in eight workers in the UK already lives in poverty - often in sectors susceptible to the adverse effects of automation - we think it’s important to build this public dialogue.

“If you don’t intentionally include, you’re going to unintentionally exclude. One of the best ways to do that is to think about this notion of ‘fairness’. This must be a national conversation.”

— Anne-Marie Imafidon

So, IFOW will be launching a consultation on the Charter for Good Work as a framework to explore foundational principles and examine individual applications. Hot on the heels of our workshop on youth voice, we’re thrilled to have Stemette Floriane Fidegnon-Edoh advising us on how to engage a wider audience in this conversation.

We’ve kicked off by discussing these examples of individual applications: micro-targeting, automated pay and performance management functions, data portability and monitoring pay gaps. We mustn’t forget that AI-related technologies also have the potential to break down barriers and improve levels of transparency, although we felt this potential has yet to be unlocked.

We’re grateful to Michael A Osborne, Anne-Marie Imafidon, Vili Lehdonvirta and Jeremias Prassl for their incisive presentations; Victoria Nash for her masterful summing up; Helen Mountfield for facilitating the workshop and lively table discussions; and everyone who contributed time and ideas - from government, industry, academia, think tanks and youth groups. Thank you - and watch this space!
