
International learnings on algorithmic impact assessments: an APPG on the Future of Work event

The development of new technologies, including those powered by AI and machine learning, is transforming the world of work. As highlighted in a recent Institute for the Future of Work report, 'algorithmic systems are being used across the economy to control fundamental aspects of work'.

The UK's National AI Strategy also recognises that the impact of AI on the UK and the wider world will be profound over the next decade. However, there is still widespread misunderstanding of the risks that data-driven systems pose, as well as a lack of confidence among business leaders as to how they can act responsibly to avoid them. There is therefore a need for pre-emptive action in the design and deployment of data-driven tools at work.

This APPG on the Future of Work event explored how algorithmic impact assessments (AIAs) are gaining traction internationally as a regulatory strategy for addressing and correcting algorithmic harms, specifically in a workplace context.


Chair: Lord Tim Clement-Jones

Speakers:

Benoit Deshaies, A/Director, Data and Artificial Intelligence, Treasury Board of Canada Secretariat

Brittany Smith, Policy Director, Data & Society

David Leslie, Director of Ethics and Responsible Innovation Research, Alan Turing Institute

Anna Thomas, Director, Institute for the Future of Work

For more about how AIAs can provide a systematic framework of accountability for algorithmic decision making, read our policy paper.

Full discussion

Lord Tim Clement-Jones:

It's really good to be part of this really interesting panel discussion today. Not least because I based my own Private Member's bill on algorithmic decision making in the public sector on the work done in Canada on the Directive there. So Benoit, very good to see you. And over to you.

Benoit Deshaies:

Thank you. Three years ago, we adopted the algorithmic impact assessment (AIA) in Canada. This tool supports the Directive on Automated Decision-Making, which sets mandatory requirements for federal government institutions using technology to assist in making administrative decisions.

The Directive and the AIA are only required at the federal government level. So they are not required for industry or other levels of government, like municipalities, but as I'll discuss later, our framework can and does serve to guide the adoption of algorithms in other contexts.

So in our framework, the AIA serves three purposes:

1. To guide the identification of potential negative impacts, as well as appropriate mitigation measures for those impacts.

2. To act as a transparency measure: the AIA results need to be published, and this creates accountability.

3. To ensure the proportionality of the requirements to the risk.

So it's important not to treat all algorithms the same way. Some are more impactful and should be subject to additional controls. Our AIA determines an impact level, and this impact level then determines which obligations must be met. So together, the Directive and the AIA form a comprehensive framework: to understand the impacts of algorithms; for transparency, letting people know where automation is deployed and how it's used to make a decision; for quality assurance, requiring testing and ongoing monitoring, for example; and for driving consultations as well, including with legal services and with experts for a peer review.

So how does the AIA let us understand the impacts of algorithms? The AIA includes around 80 targeted questions to help identify different risks. And they're classified according to different sections. We'll ask for example, what is the system deployment context? The algorithm: is it an open algorithm? Or is it a trade secret? How easily can we explain its output? What is the decision that's being automated? And what are the impacts of those decisions?

And when we talk about impacts, we're specific. We ask: how long do they last? How easily can they be reversed? Are the impacts on people's freedom, health, economic situation, or perhaps the environment? And who is impacted? Are they vulnerable clients, for example?

And lastly, there are a lot of questions about the data that's used and the privacy protection measures around that data. So as I've said, the AIA can also support stakeholder engagement. It includes questions about internal and external stakeholder consultations, and it makes several useful recommendations for consultations that should take place. And we've just released guidelines on what questions we need to ask certain stakeholders.
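By way of illustration, here is a minimal sketch of how a questionnaire of this kind can be aggregated into an impact level that then selects proportionate obligations. The questions, weights, thresholds and controls below are invented for demonstration and are not the official Treasury Board scoring rules.

```python
# Illustrative sketch only: the questions, weights, thresholds and obligations
# below are invented for demonstration and do not reproduce the official
# Canadian AIA scoring rules.

from dataclasses import dataclass


@dataclass
class Question:
    section: str    # e.g. "System", "Algorithm", "Decision", "Impact", "Data"
    text: str
    max_score: int  # score of the highest-risk answer for this question


# A tiny subset of hypothetical questions (the real AIA has around 80).
QUESTIONS = [
    Question("Impact", "How long do the impacts of the decision last?", 4),
    Question("Impact", "How easily can the decision be reversed?", 4),
    Question("Decision", "Does the decision affect vulnerable clients?", 3),
    Question("Algorithm", "Is the algorithm a trade secret rather than open?", 2),
    Question("Data", "Does the system use sensitive personal data?", 3),
]

# Hypothetical mapping from impact level to required controls.
OBLIGATIONS = {
    1: ["plain-language notice"],
    2: ["plain-language notice", "peer review"],
    3: ["plain-language notice", "peer review", "human-in-the-loop decision"],
    4: ["plain-language notice", "peer review", "human-in-the-loop decision",
        "approval by deputy head"],
}


def impact_level(answers: list[int]) -> int:
    """Aggregate per-question risk scores into an impact level from 1 to 4."""
    raw = sum(answers)
    maximum = sum(q.max_score for q in QUESTIONS)
    ratio = raw / maximum
    if ratio <= 0.25:
        return 1
    if ratio <= 0.50:
        return 2
    if ratio <= 0.75:
        return 3
    return 4


if __name__ == "__main__":
    answers = [3, 4, 2, 1, 2]  # one risk score per question above
    level = impact_level(answers)
    print(f"Impact level {level}; required controls: {OBLIGATIONS[level]}")
```

Publishing both the answers and the resulting level is what turns a score like this into the transparency and proportionality mechanism described above.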

The Directive also includes a requirement for a peer review with an expert assessment. So the AIA should serve as the basis for this review. And beyond this the publication of the AIA creates opportunities to share information about the automation.

So what benefits do we get from that in Canada? Well, first, I would say that it brought broad awareness that algorithmic tools can have certain negative impacts, and that it's important to identify them and manage them. The tool teaches departments looking to deploy automated decision systems how to evaluate their impacts and what some of the effective measures are to manage the risks. It reduces risk to institutions, but also to the clients.

And the process leads to more accurate decisions, but also more interpretable decisions that people can actually understand. And because we use a systematic approach to identifying the negative impacts, they can be reduced. The same algorithm deployed in different contexts will have vastly different impacts, so an AIA lets you identify the impacts that are specific to the context, and that's important.

Now we could ask what the best practices are to make sure that these benefits are realised. Based on our experience, I would say that it's not possible to conduct a thorough AIA without having a truly multidisciplinary team. The AIA tool touches on a lot of topics, so it's necessary to include a broad range of people to complete it.

This should include the developers, both internal and, if there are suppliers providing parts of the solution, external; business representatives who can speak to the objectives of the organisation; legal services; privacy teams; policy teams, in the case of government institutions; algorithmic impact experts, and sometimes my team will provide some of that expertise; as well as operators of the system and their potential clients. In the context of work, this could be the managers that would use the systems and the employees affected by the decisions.

It's also important to conduct the AIA early in the design stage, when it has the potential to influence how the solution will work and be adopted. It really shouldn't be a checklist exercise at the end.

It's also important that the results be published. This creates accountability and increases quality. It fights misinformation and creates opportunities for dialogue and building trust. And lastly, I'll note that it should be mandatory. For us, it's made mandatory through policy, and that's critical to ensure broad awareness of its importance and to counter obstacles to its completion, like time-to-market and cost reduction considerations.

So changing topic a little bit, could the Canadian framework serve as a basis for AI regulation? I'll caveat that my comments here are personal as I'm not an expert on industry regulation. But if we limit the domain to automated systems that make or assist in making decisions that affect people, I'd say the Canadian model can serve as a very useful basis. That would cover many of the important use cases affecting workers.

In Canada, our Directive and AIA informed the creation of a national standard that's also applicable to industry. Our AIA tool, which is openly available, has been used by other levels of government to understand and manage the risk of their automations. And there are other governments, like Uruguay, for example, that looked at the AIA, had it translated into Spanish and adjusted some of the questions to their context.

So the concept of a formal assessment to understand the impacts before deployment, as well as requirements similar to those we have for transparency, for providing explanations of how decisions are made, for quality assurance, monitoring and testing, for third-party review, and all of the rest that we have in the Directive, are proving to be effective, and they're being incorporated in many proposals around the world.

Now I recognise that our framework is still not perfect and requires improvement. Last week, we released new guidelines to inform federal departments how they should conduct their AIAs. In particular, we describe the information that needs to be collected before you attempt the AIA. And we also clarified what questions need to be explored with legal services and privacy experts in that guideline.

And separately, and this is probably more significant, over the last year, my team undertook an in depth review of the Directive and the AIA and made 11 recommendations on how it should be improved.

The first recommendation is really to expand the scope to include the work context. Today our policy only applies to external services, so where the recipients are outside the Government of Canada. But we witnessed many concerns that people had when algorithms were used by departments to evaluate candidates in a recruitment process. So my team considers it very important to expand our framework to also apply to decisions that impact government employees. We'll be engaging experts in the coming weeks, and we're hoping that will include the IFOW, to understand how best to do this.

As part of our work to expand the scope to the work context, we'll need to consider whether new questions would be useful in the algorithmic impact assessment to help guide reflection on the impacts on employees. It's not something we've researched in great depth yet, but I suppose the Good Work Charter would provide a relevant set of domains for us to consider. We could ask, for example: does the algorithm impact access to work? Will it change the fairness of work conditions or autonomy? How does automation change the current role? And so on.

So the principles of the Good Work Charter could be turned into questions for the algorithmic impact assessment, and the questions that we already have today in the AIA would continue to be relevant and we would supplement these.

To conclude, in the Federal Government of Canada, the AIA had a significant impact in raising awareness about algorithmic impacts. We're delivering better services to Canadians and to our other clients because of it. In all sectors, algorithms will get rolled out. Without corrective actions, negative impacts may be experienced by some individuals or groups. And when the automated decisions are significant, it's imperative to make sure that they are fair.

The adoption of data-driven technologies presents us with a unique opportunity to review and address past biases and inequalities and build a more inclusive and fairer society. So let's not waste that opportunity. I'll finish with this: an AIA framework is an excellent tool to ensure that automated systems affecting people and workers are deployed responsibly and will produce positive outcomes. We need to balance the efficiency and the accuracy that these tools can bring with concerns for fairness and respect for people.

Lord Tim Clement-Jones:

Two questions immediately sprang to mind there. First, you mentioned the creation of national standards, effectively through the mechanism of the AIA. How close are you to a standards body actually enshrining that, and to that then being available to be rolled out?

Secondly, you don't mention redress anywhere. And I can't quite remember whether the Directive actually includes the right to redress. But isn't that a very important part of whatever type of regulation Government adopts for its own activities?

Benoit Deshaies:

On the national standard, there's one that's already adopted in Canada. The Chief Information Officer (CIO) Strategy Council, which is accredited as a standards organisation by our standards defining body, has released the standard on the ethical use of automated decision systems. That standard is available today. Instead of algorithmic impact assessment, it's an ethical impact assessment, but it's very similar in concept. This body aims to create standards that are applicable broadly, including for industry.

On redress, it's something that I should have mentioned. The Directive does include an obligation for departments to provide their clients with opportunities for recourse. So if they're not happy with the outcome, how should they go about requesting a review of that decision? That is a requirement of our Directive.

Lord Tim Clement-Jones:

I'm sure people want to follow up on both counts. Very interesting on the existing ethical impact assessment standard too. Over to Brittany.

Brittany Smith:

I'm Brittany Smith. I'm the Policy Director at Data & Society Research Institute. We are an independent nonprofit organisation. Our mission is to advance public understanding of the social and cultural implications of data centric technologies and automation.

I'll be talking about our research on impact assessments and giving an overview of a recent legislative proposal in the US. Last summer, we published a report called Assembling Accountability: Algorithmic Impact Assessments for the Public Interest. To support the algorithmic accountability conversation and ensure that the development of impact assessments as a governance mechanism is effective, the report presents a framework for evaluating impact assessment processes.

To start, an impact assessment is a process for simultaneously documenting an undertaking, evaluating the impacts it might cause, and assigning responsibility for those impacts. Our research shows that impact assessments are a tried and tested way to ensure that companies study, explain and report on how their proposals will affect society. And not only do impact assessments clarify the harms of specific activities or products, they also set an important standard for what is disclosed about their work.

For example, today, redesigning a highway means studying how nearby homeowners' quality of life will be impacted. Major manufacturers who build overseas factories would first assess the human rights impacts for workers, in theory. However, in our current regulatory environment, no reporting at all is required for algorithms that make critical decisions about our lives, which means we know very little about whether a preventable harm or discrimination might be pervasive in technology that's used to determine our eligibility for a mortgage, or our success in a job application.

Our report maps the challenges of constructing algorithmic impact assessments by analysing their use in other domains, including finance, the environment, human rights and privacy. Building on this comparative analysis, the report identifies ten components that are common to existing impact assessment processes, in order to provide a framework for evaluating current and proposed impact assessment regimes.

We hope that this can work as a practical tool for regulators, advocates, technologists, companies, and scholars who are identifying, assessing and acting on algorithmic impacts. We wrote this report because algorithmic accountability is an important and complex conversation that's happening between a wide variety of stakeholders. But despite all this attention, we still don't have a workable process for these assessments. The effect of this is that industry will swoop in to fill the gaps, choosing the basis on which their own products are evaluated, resulting in narrower audits whose scope is prescribed in advance and that don't allow for the uncovering of novel or context specific harms, or evaluating company claims about the effectiveness of algorithmic products.

Algorithmic systems also present a special challenge to assessors, because the harms of these systems are unevenly distributed, emerge only after they've been integrated into society, or are often only visible in the aggregate. Through this research, we encourage policymakers to pay close attention to how community advocates can be incorporated in the assessment process, to ensure that assessment practices take into account the lived experience of being subject to an algorithmic system and protect the public interest.

We hope that robust algorithmic impact assessments can create a stronger basis for people and communities to act as a forum that can exert influence over algorithmic systems and hold actors involved in developing and maintaining those systems accountable. Our research also points to the need to bring together stakeholders to invest in the development of quantitative and qualitative methods to measure impacts as proxies for harms of algorithmic systems.

I'm also going to talk about the US Algorithmic Accountability Act. Before I do that, I'll just say that a lot of what I'm going to say comes from the text of the bill itself, and a section by section analysis of the bill from Senator Wyden's office. I met with and discussed these details with Senator Wyden's team a few months ago, and found that the materials they produced were especially helpful and accessible.

This is a new bill that was introduced in February by Senator Wyden, Senator Booker and Representative Clarke. All kinds of companies use algorithmic systems to make automated decisions about critical aspects of our lives, including whether we're eligible to receive a loan, whether we should be hired for a job and so on. And racial discrimination in these and other domains is illegal. But because we don't have enough transparency or accountability around algorithmic systems that operate in high-stakes domains, we often don't have enough information to determine whether discrimination is actually taking place.

It's possible that the negative impacts of flawed systems could have been mitigated if companies had tested their products for bias, faulty data, safety risks and performance issues, but companies often don't do this, either because they don't have the skills or because they don't have the incentive to do so. They certainly don't publish or disclose information to affected communities or to the government. This makes it very hard to hold them accountable, and for consumers to make informed choices. People need more information to understand where and why automation is being used, and companies need more clarity and more structure to make impact assessments more effective. So what this bill does is enable more oversight and control over what it calls 'augmented critical decision processes', which means any process or procedure that uses an automated decision system to make a critical decision. The domains in which critical decisions are made include health care, housing, financial services, and several other areas.

The Bill creates a requirement that companies assess the impact of automating critical decision making on an ongoing basis. This ensures the responsibility for assessing the impact is held by both the company that makes the critical decision and those that build the technology that enables that process. So companies have to disclose their status as a covered entity and submit formatted summary reports to the FTC about any impact assessment.

Two things stick out to me as being particularly interesting: the requirements that companies meaningfully consult with internal stakeholders and independent external stakeholders, and that they attempt to eliminate or mitigate impacts with a likely material negative effect.

The Federal Trade Commission, which is an independent agency of the United States government whose mission is the enforcement of American antitrust law and the promotion of consumer protection, is responsible for the enforcement of this bill. They would have to determine exactly what information an impact assessment includes, and, interestingly, annually publish anonymized aggregate reports on trends, to establish a repository of information where consumers and advocates can review which critical decisions have been automated by companies, along with information such as data sources, high-level metrics and how to contest decisions. They are also empowered to hire 50 staff and establish a Bureau of Technology.

Our view is that this legislation opens up the possibility for even more concerted efforts across government, academia and civil society. Chief among those efforts will be creating robust methods for evaluating and challenging AI systems through a public lens. The information provided by the impact assessments that are required under this Bill will sustain feedback loops between researchers, advocates, communities, and regulators, leading to practices in which communities that are the most impacted by AI systems have a meaningful say over how they are designed and deployed.

I'll just close by talking about some improvements made to this Bill over the last few years. There was a version introduced in 2019 and a version introduced in 2022. The 2022 version is a significant improvement. The bill's co-sponsors consulted with dozens of experts and advocacy groups.

Some important changes to note are that the new Bill clarifies what kinds of algorithms and companies are covered, ensures assessments put consumer impact at the forefront, and provides more detail about how the reports are structured. A major difference between the two Bills is that the 2019 version focused on, and would have required impact assessments of, high-risk systems. The updated bill requires companies to evaluate the impact of algorithmic technology used in making critical decisions. That shift in focus, from the nature of the AI system to the effect of the decision being made by the system, centres the potential discriminatory effect of the system, which is more specific and potentially more effective.

It can also be harder to determine that a system in and of itself is high risk, especially when the determination is made separately from the context in which the system is used. So that shift in focus is important to point out.

Lord Tim Clement-Jones:

I wanted to ask you about the Bill to start with. You can introduce a bill, as we know in the UK, but the chances of actually getting it through depend very much on the politics and I wanted to get your assessment as to whether you thought that was a possibility.

But also, tactically, because you've done so much work on impact assessments, whether you might be better off engaging with the executive, i.e. talking to people like the National Institute of Standards and Technology, who are doing quite a lot on things like risk assessment and impact assessment. You seem to have chosen the legislative route. Now why is that?

Brittany Smith:  

I was hoping that I wouldn't have to say something negative. But we don't know if this Bill will pass. Let's put it that way. I think the chances are not what we would like them to be, as Congress has a few other things they are focusing on at the moment. So we're not sure if this Bill will pass or what the next steps are legislatively.

And yes, I do agree it is more effective to focus on the executive branch, including the independent federal agencies that have responsibilities to uphold existing laws and the agencies that are procuring and deploying AI systems without any kind of impact assessment methodology being used.

We have been doing a lot of work with the National Institute of Standards and Technology, also called NIST. NIST recently put out a report that we consulted on, which we thought was particularly good, on mitigating bias. They also put out a risk management framework that includes lots of the things that we talked about just now, around the importance of consultation and the importance of measuring the effects of AI systems on an ongoing basis, not just pre-deployment or as a one-off. But this work is voluntary. I do think that they are a very, very influential agency, and even the publication of voluntary frameworks and standards has an effect on industry, because it gives us as advocates and the public a bar: to say, are you doing this or not. And we have a measure against which we can evaluate their claims.

So we are focused on agencies in the executive branch and we are not optimistic about legislation. But now at the very least, we have the language that we need, and the framework that we need to say: this is what Congress should be doing; this should be mandatory in the first instance. And in parallel, we can focus on voluntary work like standards.

Lord Tim Clement-Jones:

That's great because, in a sense, the two do go hand in hand. And I think the more we understand both sides, in terms of overarching regulation and standards, the more progress we can make that way. I don't think the work on that risk assessment side of things is well known enough. Perhaps we should open that up a bit.

David Leslie:

Building on the comment that you made about politics and tactics, I think we need to focus on the political level. I'm really going to think more about the requirements that surround mandatory impact assessment. So what is the mandatory character of that? We see with UNESCO, for instance, that the AI ethics recommendation does have an ethical impact assessment component, but ultimately it will be up to member states to construct and execute it. In the same way, the Council of Europe will likely have in its legal framework some component of human rights, democracy and rule of law impact assessment. But the details of this, the nuts and bolts of this, will be largely up to member states, who will have to construct these things.

For this reason, I want to focus today on really motivating the need for an Algorithmic Accountability Act in the UK context, that contains requirements for integrated impact assessment, new statutory duties for transparency, and public consultations. Much as we first recommended in the 2020 publication of Mind The Gap.

I'll start with a couple of critical observations before closing with some constructive recommendations. The first critical point I want to make is that there are significant gaps in the current UK statutory landscape related to the regulation of algorithmic systems, and these pose real threats to the future of good work and social sustainability. The evolving Online Safety Bill is perhaps a good starting point when one considers some of the threats posed to fundamental rights and freedoms by online digital platforms.

But this is nowhere near enough when we consider the societal hazards posed by the accelerating development of AI and machine learning across the UK and indeed the world. We should note that the phrases artificial intelligence and machine learning appear only once across the sprawling 225 pages of the Bill, and they are euphemistically nested under the term 'proactive technology'. The word algorithm appears 16 times and only as an ancillary aspect of the various wider risk assessment and system disclosure processes. But there's no direct inclusion of the functional and practical requirements for algorithmic accountability.

This potential statutory shortfall has already been recognised in the European Union, where, in addition to their legal capture of online platforms in the Digital Services Act and the Digital Markets Act, they're attempting to build a juridical framework to control the risks of AI under the rubric of the Artificial Intelligence Act. Though significantly flawed, the EU AI Act will have dramatic consequences for the post-Brexit innovation ecosystem, where, much as with the GDPR, technologies imported into the European market will be subject to the legal requirements of the AI Act.

Without a robust UK legal framework, there will continue to be high degrees of uncertainty in the data-driven innovation domain. This may trigger a race to the bottom vis-a-vis minimalist standards compliance and thus create path dependencies of bad behaviour that have devastating long-term impacts on the UK's stature as a pace setter in the domain of responsible innovation.

We should note here that legislators in the US context, as Brittany mentioned, have already recognised some of these issues, having now introduced this updated Algorithmic Accountability Act that would mandate impact assessment for automated decision systems that make critical decisions. It would also create a public repository at the FTC of these systems to ensure a degree of public transparency.

I'll also flag up that last November, Tim introduced a Private Member's Bill, the Public Authority Algorithmic Bill, which, in the public sector context, sets out requirements for algorithmic impact assessment. The Bill also contains transparency, system logging, and mandatory staff training mechanisms.

The fact that none of these aspirational ideas have legislatively materialised brings me to a second crucial point. The industrial levels of regulatory uncertainty and legal disorder caused by this gap in the UK statutory environment are vastly out of sync with, and disproportionate to, the rapidly expanding portfolio of potential adverse impacts that the accelerating spread of algorithmic technologies could have on people, society and the planet.

This is why we need mandatory impact assessment. As the short history of the big data revolution demonstrates, the widespread proliferation of algorithmic systems, data-driven technologies and computation-led analytics has already had numerous harmful effects on human rights, fundamental freedoms, democratic values and biospheric sustainability. At the individual or agent level, the predominance of radical behaviourist attitudes among the academic, industrial and government drivers of data innovation ecosystems has led to the pervasive mobilisation of individual targeting and predictive analytics.

For instance, in the domain of e-commerce, strengthening regimes of consumer surveillance have fueled the use of large scale behavioural technologies that have enabled incessant practices of hyper-personalised psychographic profiling, consumer curation and behavioural nudging. Many critics have observed that such technologies have tended to exploit the psychological vulnerabilities of targeted people, instrumentalizing them and treating them as manipulable objects of prediction, rather than as reflective subjects worthy of decision making autonomy and moral regard.

Analogous postures have spurred state actors and other public bodies to subject their increasingly datafied citizenry to algorithmic nudging techniques that aim to attain aggregated patterns of desired behaviour, which accord with government generated models and predictions. Some scholars have characterised such administrative strategies as promoting the paternalistic displacement of individual agency and the degradation of conditions that are needed for the successful exercise of human judgement, moral reasoning, and practical rationality.

Setting aside these threats to basic individual dignity and autonomy, the growth of data-driven behavioural steering at the collective level has also generated risks to the integrity of social interaction, interpersonal solidarity, and democratic ways of life. Computation-based sorting and management infrastructures continue to multiply and, if left unchecked, they promise to jeopardise more and more of the formative modes of interpersonal communication that have enabled the crucial relations of mutual trust and responsibility in modern democratic societies.

This is beginning to manifest in the widespread deployment of algorithmic labour and productivity management technologies, where manager-worker and worker-worker relations of reciprocal accountability and recognition are being displaced. Deputised instead are depersonalising mechanisms of automated assessment, digital surveillance and computation based behavioural incentivization, discipline and control.

Here, the continuous sensor-based tracking and monitoring of workers' movements, affects, word choices, facial expressions, and other biometric cues converge with algorithmic models that purport to detect and correct defective moods, emotions and levels of psychological engagement and wellbeing. This may not only violate a worker's sense of bodily, emotional and mental integrity by rendering their inner life legible and available for managerial intervention; it will also allow for those inner lives to be optimised for productivity.

These forms of ubiquitous personnel tracking and labour management can also have so-called panoptic effects, causing people to alter their behaviour on suspicion of being constantly observed or analysed. This can consequently deter the sorts of open worker to worker interactions that enable the development of reciprocal trust, social solidarity and interpersonal connection that is needed for good work and a sustainable workplace environment.

The labour management example merely signals a broader constellation of ethical hazards raised by the parallel use of sensor- and location-based surveillance, biometric and physiognomic profiling, and computationally driven technologies of behavioural governance in areas like education, job recruitment, criminal justice, national security and border control.

The heedless deployment of these kinds of algorithmic systems could have transformative effects on democratic agency, social cohesion, and interpersonal intimacy, preventing people from exercising their freedoms of expression, assembly, association, and violating their rights to fully and openly participate in the moral, cultural and political life of their communities.

The importance of acknowledging this unprecedented scale of potential risk then leads me to my third and final set of points. These are about how to constructively confront this kind of constellation of regulatory uncertainty, legal disorder and the potential pervasiveness of adverse impacts. And how to do this with a horizontally-oriented Algorithmic Accountability Act that mandates impact assessment, proportionate stakeholder engagement, and transparent processes of risk management, impact mitigation and innovation assurance.

I'm just going to quickly go through three kinds of necessary components of a sufficient approach to algorithmic accountability. First, we need to have new statutory duties for public consultation, as we stated in Mind The Gap. That should include mandated stakeholder engagement processes, including reflection on inclusivity and diversity limitations and processes that facilitate proportionate stakeholder involvement.

Second, it must include an integrated impact assessment that combines human rights and equality due diligence with setting up the technical and socio-technical guardrails needed for end-to-end responsible innovation practices.

Finally, we need to include statutory duties of transparency that mandate accessibility to information about the impact assessment practices, risk management, impact mitigation and innovation assurance measures taken across the innovation lifecycle.

Lord Tim Clement-Jones:

What I think all panellists have illustrated is the need for an overarching bit of legislation or regulation, like an Algorithmic Accountability Act, plus a tool at the base such as a mandatory risk or impact assessment. One of the big questions is the commonality across jurisdictions of the kinds of risk assessment tool that we can look forward to.

I thought you made a very strong point, David, about the need for AI adequacy for the UK framework when they come to it. We've got an AI governance white paper coming down the track. But I'm slightly pessimistic about whether we're going to get that kind of horizontal framework. I think we're more likely to get a risk assessment or an impact assessment standard.

I wondered how close you thought we are, because you've done a great deal of work. The ICO has done a great deal of work; you've collaborated with them. You've done a great deal of work on the HUDERIA for the Council of Europe. And you've done work with GPAI on the justice side as well. How close do you think we are to getting a kind of standard? And when you're putting that together, are you reaching for the kind of work that Brittany has been doing and the kind of work that they're doing in Canada? It seems to me that what we mustn't do is reinvent the wheel if we've got a kind of risk-based framework that we're trying to work towards.

David Leslie:

First and foremost, I think what's heartening in the UK context, especially in thinking about the Government's white paper, is that there's an increasing acknowledgement of the horizontal character of the problems, the cross-sectoral character of the problems: this acknowledgement that, as a general purpose technology, AI is becoming pervasive as an innovation-spawning technology penetrating into every domain of life.

I think that acknowledgement is creating an atmosphere where we can think cross sectorally, in terms of the normative issues that surround the governance dimension. That's something that is happening in the UK context.

We are used to a very vertical approach to some of these issues, going from a sector upwards, as opposed to working from the more general algorithmic accountability problems downward. As I see it, there's been a responsiveness to the evolution of principle-based approaches in the EU context, and now a more human rights orientation, even in the US context. I think the ears are open in the UK. I think we are definitely on the way to retaining a place as a pace setter. So I think we're getting there. I'm optimistic.

Lord Tim Clement-Jones:

The question I was going to ask you, Brittany, is how much do you feel we could come together for an international standard, and to what extent is the proposed EU AI Act already having an impact on other jurisdictions' policy, legislation and regulation development?

Brittany Smith:

I think, anecdotally, in my experience working on federal AI policy in the US, the EU AI Act hasn't come up very much yet. I think it could, as it progresses through the process in Brussels. But right now we do not have a comprehensive regulatory proposal on the table in the American government around AI in any form, on any basis. The proposal for that does not exist. And anecdotally, I have found that lots of people are very confused about the EU process. So the updates, like this committee is doing these amendments, and then it goes through this and then… it's very confusing.

I have found that folks need help understanding the process. Some people have put out great reports on this. The Ada Lovelace Institute just published a really good overview of the EU AI Act that I can use and send to US policymakers, when this comes up, as a helpful background resource. For now, I would say it's not inspiring other people, but it's not because it's not a good law. It's because the EU process is very confusing. And we've got a lot going on in America at the moment.

Lord Tim Clement-Jones:

Yes, absolutely. But of course, if you remember what happened with the GDPR, people in major companies like Microsoft treat that as the gold standard for data protection. And therefore, in a sense, that's what they should be thinking of, and I suspect that some of the bigger companies are, especially when people like Brad Smith are calling for regulation.

Brittany Smith:

I think that's true, although I would say we still don't have a federal-level privacy law. It has inspired state-level laws in California, which are very good and I think are very effective. But we still don't have a federal-level law. So if we're asking, you know, has the EU AI Act inspired policy and legislation? I would say not yet. But GDPR is a good guide that it could.

Lord Tim Clement-Jones:

Benoit, where do you think Canada is in this debate? Because, in a sense, you've started from the top. And in a way, you're in the process of culture change, which is very heartening in many ways. Do you feel you're so far ahead that you need to look at other jurisdictions? Or how much are you kind of drawing in some of the examples that you see?

Benoit Deshaies:

We're keeping a close eye on what's happening because we introduced the Directive three years ago. And in this space, it's really ancient in a way, although it seems very recent. There's been a lot of thinking by a lot of institutes and governments around how we need to govern AI. And there's been better understanding as to what the impacts can be as well since then.

We're very interested in looking at other models as well. For example, in Europe, they've adopted a list of high risk use cases as part of that proposed law. And that's an interesting approach. It's different from our approach, where we use the AIA and it's multi-factorial, it looks at many aspects to determine the impact level.

But there are certain use cases that we know raise a lot of concerns from the outset. So is there value in saying that as soon as this use case is present, certain minimal controls should also be present? Probably. So we don't want to abandon the approach with the AIA, which we still believe is a good approach, but we're curious to see how we could incorporate some of these other best practices, like the high-risk use cases.

And thank you for mentioning GDPR; that had profound impacts everywhere on privacy, and something similar can be expected for AI regulation. Certainly industry, I believe, is paying attention to that, because what will be adopted in the EU will have ripple effects for companies wanting to do business with the European market. So it's certainly very relevant, but it's more for the industry side, rather than government operations.

Lord Tim Clement-Jones:

When assessing the impact of AI on employment, which groups are we concerned with: workers, job applicants or broader society? How does this affect the consultation procedure or process? How conscious are you of all of those stakeholders, so to speak, when putting together some of the tools that you've put together? And I'll come to Brittany and Benoit for your experience as well.

David Leslie:

I think it's important to remember that stakeholder involvement doesn't pop out of the ground like a mushroom. There are distinctive, deliberate processes that need to go on in order to identify the relevant stakeholder groups and the relevant power positions of stakeholders, as they need to be included in a stakeholder engagement process.

There are processes of positionality reflection, reflection on the limitations of inclusion and diversity, that need to happen. And then there are processes to gauge the proportionality or depth of participation: is it simply informing or consultation, or is it co-designing with a group? All of these things are part of a pretty well developed and well studied field of stakeholder engagement that needs to be incorporated in a context-specific way into each innovation use case.

In other words, what we can provide, from a governance standpoint, are those procedural mechanisms to make sure that there's sufficient stakeholder salience analysis and positionality reflection etc. But it will be up to each specific use case to actually do the hard work of identifying the proper stakeholders and engaging them at the most appropriate levels.

Lord Tim Clement-Jones:

Brittany, from your point of view, how closely aligned to the employee-worker situation is the AI act in the States?

Brittany Smith:

To this question about which groups are concerned, the answer is all of them. I don't want to cast too wide a net, but you can easily imagine algorithms being used that target each one of these examples, algorithms that manage work.

Data & Society put out a report about algorithmic management. Algorithms that manage work for workers in a factory or a warehouse, algorithms that review your resume and determine if you make it to the next round, and algorithms that affect what jobs are available, because some of those things have been automated and some are not. For each of those, you can imagine an impact assessment process that highlights the impact for that affected community.

So my answer is all of the above, but in context-specific ways, which is why impact assessment methods can be so challenging, because they are context-specific. But the information that we want to gather can be standardised. Where is an algorithm being used? By whom? Why? How do you contest it?
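A minimal sketch of what such a standardised disclosure record could look like is below; the field names are hypothetical rather than anything prescribed by the Bill or the FTC.

```python
# Illustrative sketch only: these field names are hypothetical and are not
# taken from the Algorithmic Accountability Act or any FTC reporting format.

from dataclasses import dataclass, asdict, field
import json


@dataclass
class AlgorithmDisclosure:
    system_name: str           # where an algorithm is being used
    deployer: str              # by whom
    purpose: str               # why it is used
    critical_decision: str     # the decision it automates or supports
    data_sources: list[str] = field(default_factory=list)
    contest_process: str = ""  # how an affected person can contest the decision


record = AlgorithmDisclosure(
    system_name="resume-screening-v2",
    deployer="Example Staffing Ltd",
    purpose="Rank applicants for interview shortlisting",
    critical_decision="Whether an applicant proceeds to interview",
    data_sources=["applicant CV", "application form answers"],
    contest_process="Email hr-appeals@example.com to request human review",
)

# Standardised records like this can be published or submitted as structured data.
print(json.dumps(asdict(record), indent=2))
```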

None of those answers are secrets. They should not be. And when you start asking them, people start going about deploying algorithms much differently.

Lord Tim Clement-Jones:

You're talking about a very context-specific type of regulation, whereas the EU is rather less context-specific, talking about high risk and unacceptable risk at the top end. So we're going to have some interesting debates when it comes to GPAI, I suspect, as to whether or not, you know, we can agree on not quite a blanket approach, but certainly a common approach. Benoit?

Benoit Deshaies:

Well, it's a similar answer, it's potentially all of these. It depends on context. For us, the Directive is really focused on administrative decisions. So decisions that will impact people's rights or privileges.

There are certain use cases that at first glance may not be administrative decisions. If we talk about recruitment and performance, certainly these would be administrative decisions. Which candidate gets the job: that's a decision. If we talk about tracking and monitoring, it's less obvious that it would be an administrative decision. But the challenge is that it can lead to administrative decisions. If you're monitoring and tracking performance, and down the road that leads to certain actions against the employee, then that is making an administrative decision, and upstream from that was the monitoring and tracking. So we have some work ahead of us to trace the boundary of our Directive on those questions.

Lord Tim Clement-Jones:

Anna has the unenviable task of winding up our proceedings. So back to you, Anna, and my personal thanks to the panel. Thank you very much indeed. Really interesting conversation.

Anna Thomas:

I'll try and pull together some common themes, rather than summarising things. It's really interesting that, although work is plainly ongoing on methods and guidance, all the speakers spoke to the important role that mandatory AIAs could have, and the core planks these should entail, including the identification of stakeholders and then engagement with them on an ongoing basis. That engagement must include workers, but should also include others, remembering that work is central to most people's lives and is the thread that connects people's experience with their communities and society more widely.

Other core elements include enabling an ongoing process for evaluation, establishing integrated AIAs within organisations, collating basic information (which must be mandatory and spelled out in legislation), and further duties and rights relating to disclosure and transparency.

The Canadian model, it seems, does serve as a framework which could apply in principle to the private sector, with those additions and extensions that are appropriate, for the reasons that we've all spoken to. And it's significant that Benoit has told us today that the current model will be extended to work very shortly, together with the improvements made in the new guidance last week.

David spoke to the gaps felt acutely at work, which show work as both a lens and a magnifier of wider challenges and opportunities. And again, this points very strongly to the advantages of mandatory AIAs which incorporate existing requirements, in both law and good practice, for data protection, human rights and safety impact assessments and others, which we have incorporated into our model at the back of the publication.

Those risks extend not just to the ones that are more usually talked about in the context of the future of work, but also to democratic agency, social cohesion, connection and solidarity. All this points to overarching legislation that pulls this together, building on the best of the international models that we've heard about today, and drawing, we hope, on the UK's strengths in governance, law and ethics as well as innovation. If we do that, there is, we hope, a good opportunity for the UK to produce a powerful Algorithmic Accountability Act, or equivalent, as part of the next phase of the UK strategy.

Lord Tim Clement-Jones:

Thanks very much, Anna. It's been a pleasure chairing today, and that was a brilliant summing up. Thank you.

Date

April 27, 2022, 16:00 to 17:00

Location

Zoom

