Dear Friends of IFOW
This month we’ve supported the All-Party Parliamentary Group on the Future of Work’s final report for its inquiry into The New Frontier: AI at Work. Today, we’re launching our own policy paper and toolkit to follow it up: Building a Systematic Framework of Accountability for Algorithmic Decision Making. The paper builds on the Cabinet Office’s excellent new transparency standard for the public sector and on the international UNESCO Global Agreement on AI, steered by the Turing Institute’s ethics team. Be the first to read it here.
It’s designed to support the second phase of the AI Strategy, helping to shape responsible innovation and meaningful accountability through insights from our research on the use of AI and algorithmic systems at work. As the deadline for the UK’s AI Regulatory Framework fast approaches, we think a sharp focus on work shines a light on both the toughest challenges and the best models for AI regulation. Read more below…
Meanwhile, on the global stage, the White House’s Office of Science and Technology Policy launched a fact-finding mission aimed at a new AI Bill of Rights. New York City Council passed a bill that ensures people are told when an automated system is making a hiring decision about them, or evaluating their performance at work. Spain is supporting a transition to hybrid work that includes a ban on contacting workers outside core hours. And the Canadian Government is reviewing its Directive on Automated Decision-Making, which has been up and running since April. Read more here.
Anna and the IFOW team
Institute for the Future of Work
The UK Government recognises the need to build on new algorithmic transparency standards in the public sector, and is considering the merits of a unified impact assessment as a form of assurance for AI. A framework of accountability centred on the performance of Algorithmic Impact Assessments across value chains and innovation cycles could set global standards for the development of responsible technology in the workplace and beyond.
Drawing on the UK’s strength in law and governance, as well as innovation, our research on automated decision-making in the work context suggests that four essential ‘planks’ of the impact assessment should be set out in primary legislation. But regulation will also need to allow for context-specific responses as sectoral guidance is developed and regulatory sandboxes test elements of the assessment.
The four planks of an effective AIA are:
Identifying individuals and communities who may be impacted. Key resources for this stage are the Turing Institute’s ‘Stakeholder Impact Assessment’ and Human Rights Impact Assessments. This step should form the basis for multi-stakeholder engagement throughout the process.
Undertaking a risk and impact analysis before deployment. This step should start with disclosure of the real purpose, scope and intended use of the system. It is aimed at identifying pre-emptive actions, which may be technical or non-technical. Organisations must be clear about the methods and metrics they select.
Taking appropriate action in response to the ex ante assessment. Our research shows that common tools and approaches tend not to focus on enabling appropriate, context-specific mitigation. Mandating ‘reasonable and proportionate’ steps, which would depend on resources, severity of harm and proximity to harm, would steer regulators and organisations towards this essential step.
Establishing a process for continuous evaluation and response. This step would ‘future proof’ the model because it would allow for the discovery of new or unexpected impacts. It also allows for continuous stakeholder engagement, recognising that some impacts are evaluative and may be widely distributed.
The Ada Lovelace Institute has published Regulate to Innovate following a series of expert workshops. Ada proposes new, overarching, domain-neutral statutory rules rooted in legal and ethical principles, and highlights the scale of the task to come.
At what scale is it fair to determine AI regulation? This video captures a fiery debate about the EU AI Act hosted by the Institute for Ethics in AI. Watch Professor Jeremias Adams-Prassl (find his shelf in our Knowledge Hub, here) explain how, under the current proposal, ‘the EU AI Act would not create a baseline for AI regulation, but a ceiling’.
Data &amp; Society has released two papers with policy recommendations on human rights and algorithmic impact assessments as tools with the potential to help measure, mitigate and hold people to account for algorithmic harm.
A new book by Alessandro Delfanti, who previously analysed the way Amazon patents reveal a vision of workers as the ‘sensing appendages’ of machinery, sets out a comprehensive take on workers vis-à-vis robots inside the behemoth’s four walls. It covers Delfanti’s recent findings, which complement our own, about the way technology learns from the actions of workers in real time.
By contrast, and for a more hopeful discussion, this debate between US academics Isabelle Ferreras, Veena Dubal, Juliet Schor and Nicole Moore considers whether the gig economy could provide the seeds of a more democratic form of work organisation.
Join us at Tech UK’s Digital Ethics Summit next Wednesday 8th December at 13.25.
Join Anna Thomas in conversation with Sana Khareghani, Head of the UK Government Office for AI, next Thursday 9 December at 13.30.
Join the Institute for Employment Studies to discuss the findings of Not just any Job, Good Jobs, on young people’s views of good quality work and support, the quality of their experiences in work, and the impact of the pandemic.
If you have any ideas, comments or suggestions for our round-up, please drop us a line at firstname.lastname@example.org.
Thank you for your time and interest. If you enjoyed this and know someone else who might benefit from our newsletter, please share it with them. If someone has forwarded this to you but you would like to receive this update yourself, please subscribe here.