
Using AI in the workplace: ethical risks and policy responses

A Future of Work seminar with Angelica Salvi Del Pero and Peter Wyckoff, OECD.

Artificial intelligence (AI) systems have great potential to improve workplaces, but ensuring their trustworthy use means addressing the ethical risks they can raise.

This Future of Work seminar discusses four key areas of risk – human rights; transparency and explainability; robustness, safety and security; and accountability – and ongoing policy action in OECD countries, including existing legislation, society-wide initiatives on AI, and new workplace-specific measures and collective agreements.

Watch back the session: 

About the speakers: 

Angelica Salvi Del Pero, Senior Advisor – OECD

Angelica Salvi Del Pero is the Senior Advisor to the OECD’s Director for Employment, Labour and Social Affairs, and she leads the work on the ethics of AI adoption in the workplace for the OECD’s project on AI in Work, Innovation, Productivity and Skills. She previously worked as an economist in the Social Policy Division of the OECD and, before joining the OECD in 2010, was a consultant for the World Bank and a research fellow at Centro Studi Luca d’Agliano. Angelica holds a PhD in Economics from the University of Milan. Angelica tweets at @Angelica__Salvi.

Peter Wyckoff, Junior Policy Analyst – OECD

Peter Wyckoff is a Junior Policy Analyst working with the OECD’s Director for Employment, Labour and Social Affairs. Peter holds a Master’s in Global Governance and Diplomacy from the University of Oxford. Peter tweets at @PeterSWyckoff.

Edited transcript:

Angelica Salvi Del Pero: I'm Angelica Salvi Del Pero from the OECD. It's such a pleasure to be here with Peter Wyckoff, my co-author on this paper. We thought we'd present the main findings of the paper, and then leave ample time for discussion in the second part of the meeting.

So this work that we're going to present today is part of the project that the OECD is doing on AI in work, innovation, productivity and skills. With this project, we want to develop evidence and policy recommendations on the impact of artificial intelligence in the workplace, and how that is evolving in OECD countries. As part of this work, we have set out to understand what needs to be done to ensure that when artificial intelligence is adopted in the workplace, it is done in a trustworthy way.

Comparable evidence on the use of AI systems in the workplace is still quite scant. We are actually doing a survey to gather information on how much AI is used in the workplace, looking especially at two sectors, finance and manufacturing, and this evidence will be released in December. But in the meantime, the evidence that we have found is that, while it is still not the majority of firms that use AI in the workplace, this use is increasing. The chart that you can see uses evidence from a 2019 artificial intelligence survey, and it shows that 70% of the firms interviewed were using AI for human resource functions.

The literature also shows that, of course, there are many benefits to the use of artificial intelligence in the workplace. We can, for example, mention the potential increase in productivity and the augmentation of workers' capabilities, which can help improve inclusion, especially for people with disabilities or with less strength for some tasks. AI can also improve the safety of workers in some tasks, and reduce the number of tedious tasks that they have to do.

But at the same time, there are also some risks. The risk of automation leading to lower employment rates for certain groups is not something that we look at in this paper. But there are, more generally, ethical concerns that are raised when we use artificial intelligence in the workplace. Looking at these surveys conducted by BCG Gamma, a large number of workers expressed concerns. For example, 65% of the interviewed workers were worried that using AI in the workplace would dehumanise the work that they do, and 64% were worried about the protection of their personal data when these systems are used in the workplace.

With this paper and this work, we aim to define the main principles of trustworthiness that need to be upheld to ensure that the use of AI is ethical. In this work we take a risk-management perspective, meaning that we focus on preventing harm. This also means that in our paper, and in the presentation that we're giving today, we do not really give a balanced view of the benefits and the potential risks; we focus on the risks because of the approach that we take.

Moving on to the main risks that we see, we are using the OECD principles for responsible stewardship of trustworthy AI. These are principles that were adopted in 2019, and have since been adopted by more than 40 countries, in the OECD and beyond.

According to these principles, we see four areas of risk that need to be addressed, to ensure that the use of AI is trustworthy. The first is human rights. The second is transparency and explainability. Then we have robustness, security and safety. And finally, accountability. I will go through these areas, giving some examples of what we mean, and then Peter will present the ongoing policy responses that countries have started to take to address these issues.

Moving to human rights, the first sub-area that we want to discuss is the respect of privacy. It's important to remember that the monitoring of workers is definitely not something that started with the use of artificial intelligence, nor with digital technologies; it's much older than that. But some of the characteristics of AI systems mean that the monitoring that can be done on workers is much more intense and potentially risky. For example, the deployment of predictive models and the processing of unstructured data, along with the use of biometric data or facial recognition systems, mean that the amount of information that can be processed and used to monitor workers is much larger, and is also transformed in nature. In addition, the amount of monitoring, especially of remote workers, is reported to have increased significantly during the COVID lockdowns, when firms were all of a sudden faced with having to monitor the productivity of their staff working remotely.

We found, for example, reports of software that was installed on staff computers without their consent and that was taking pictures of workers while they were working, or monitoring keyboard activity, and so on and so forth. In addition, we have also found evidence in the literature of companies that would monitor workers' activities outside the workplace, such as looking at their location, with the intent of monitoring their collective action and trying to discourage it. Of course, this breaches workers' privacy rights and is something that needs to be addressed.

The second risk that we see in this area is to the autonomy and the agency of workers. I've said that by using AI we can improve the quality of some jobs by automating tasks that are repetitive, boring and uninteresting, and this is true. But there is also a risk that when this is done systematically, when the tasks that people do are managed by AI more intensively, the autonomy and the agency of workers is reduced, and in the longer term this can lead to detachment from the workplace and a reduction in the satisfaction that workers can draw from their work.

Algorithmic management in its purest form, meaning an algorithm that decides what each worker needs to do at various moments in their working day, is still quite limited at the moment, but this is potentially a dangerous direction. For example, there are reports of medical staff who use AI to help with their medical diagnoses, and doctors report that they often face pressure to comply with the recommendation they're given, either to ensure efficiency or to avoid problems when they have a different opinion. The risk in the longer term is a reduction in creativity and in innovation because of these systems.

The third area in this category that I want to discuss is fairness, bias and discrimination. Instances of bias and discrimination in AI systems are quite widely reported. It's important to keep in mind that using AI, and formalising the rules for some workplace processes such as HR processes, can help reduce bias and discrimination, because simply formalising the rules makes it clear what they are and makes workers think about what they need to do in order to be fair in their decisions. In addition, the use of AI in hiring can increase the pool of candidates that employers are able to consider when they fill a job opening, and therefore improve inclusion in the labour market.

Yet we know that AI systems struggle with bias at the data level, where the data used to make predictions replicates past biases. But there is also bias at the system level, because when the systems are designed, some of the variables considered in the algorithms, or even just the type of data chosen to develop and then implement them, can result in discrimination.

The risk is that, compared to decisions made by humans, which may themselves be biased, using AI systematises some of these biases, replicating them at scale in all the decisions that are based on these systems.

We will now talk about transparency and explainability as the second core dimension of trustworthiness. The first point here is awareness of interactions with AI systems in the workplace. Workers are not always aware that they are being assessed or processed by AI; this is typically the case in hiring. And even if employers are supposed to obtain consent for the use of personal data, it's really hard for workers to deny this consent, whether they are already employees or not yet hired.

This means that the ability to trust and understand the outcomes of AI systems is sometimes hindered: if a person is not able to know that they're interacting with an AI system, and if the use of these systems is not transparent, then it's very hard for workers to get understandable explanations of why certain decisions were taken about them. This is an important right when it comes to the workplace, because without an explanation it becomes very difficult to rectify a decision when a worker feels it was unjust or wants to challenge it with the employer. These risks are especially common for off-the-shelf AI systems, which offer less control over the design, development and application of the system itself.

It's true that for some uses of AI we don't really care too much about full explainability and the ability to rectify outcomes – Netflix, for example – but in the workplace this is a really important dimension. On the ability to rectify outcomes, one example is platform workers, who face especially difficult decisions, both because algorithmic management is much more common in this context and because, not being full employees, they have fewer rights. So this is a particularly sensitive area where we need to be aware of the risks of not being able to explain why some decisions are taken, for example on who gets the best shifts, or why someone is being removed from the platform itself.

The third area is robustness, security and safety. There are multiple examples where AI can improve safety. An important one is in the recycling industry, which is an industry with a really high risk of workplace accidents, and the use of AI has been shown to improve things for workers there. But if not implemented well, AI can also raise risks, either because of interaction with robots in the workplace or because of the intensification of work through the monitoring of workers, and this has typically been the case in warehouses that use these systems.

The use of AI can also heighten digital-security risks because of the type of information that these systems collect and develop about workers. This is often sensitive information that, in the case of a cyber attack, can be released and disseminated.

The fourth and last point is accountability. It is often inherently unclear who is legally responsible if something goes wrong with the use of AI in the workplace. Is it the programmers? Is it the developers? Is it the companies that commercialise the product? Or is it employers? We find that if the developers and designers are not involved in these processes, most responsibility would fall on the employers, and this risks exposing employers to too much risk, especially small ones that may be less financially able to bear the cost of potential lawsuits. In fact, this is what we have seen so far: in case law, employers have in some cases been held responsible.

The approaches to improve accountability in the workplace have ranged from auditing to having a human in the loop or on the loop, meaning involving humans either in vetting the final decision that is proposed by the AI system or in being able to check such a decision. With this, I will pass the floor to Peter, who will talk about the policy responses that we've seen so far.

Peter Wyckoff: Good morning, everyone. I'm going to briefly run through some of the policy responses we're seeing. Obviously, the point of this project with the OECD is to develop good policy recommendations for policymakers and other stakeholders. At this point, we're just gathering information about what countries are doing, and a lot of what countries are doing is simply applying existing policy, and that really needs to remain the foundation of policy responses. On the whole, there's a lot of really relevant existing policy that can be applied to questions about the ethics of AI in the workplace. There's also a lot of self-regulation, co-regulation and experimentation going on. And then there's development of new policy as well. I'll talk about some of the new policy we're seeing towards the end.

These policies are either general policies that have workplace-relevant elements, or policies designed specifically for the workplace. So today I will first look at the first column, existing policy, both general and workplace-specific, and then move on to talk about new policy, again both general and workplace-specific.

So, speaking about existing legislation being applied to ensure the ethical use of AI in the workplace: probably the most evident example of this is anti-discrimination. We held an event in February with Equal Employment Opportunity Commissioner Keith Sonderling from the United States, Australian Human Rights Commissioner Lorraine Finlay, and the Head of the Research and Data Unit at the European Union Agency for Fundamental Rights, Joanna Goodey, to talk about this, because anti-discrimination policy in the workplace is often very well written out in OECD countries, and it has already become very clear that there are some challenges when it comes to AI use, as Angelica described.

Even if the existing legal framework is strong, enforcing it when it comes to AI is complex. Standard tools, such as auditing, can potentially help improve that, and we'll see how new policies have been developed to increase the accountability and transparency of AI systems. But there are also concerns about specific legal frameworks. In the United States, for example, a lot of the onus for seeking redress in cases of employment discrimination is placed on specific individuals: a specific worker needs to file a lawsuit to say they think they've been discriminated against, and then the Equal Employment Opportunity Commission will investigate. It's really hard in some contexts for workers to be aware that AI has been used and that they could potentially have been discriminated against by AI. So there are potential fixes to that. But still, for the most part, the foundation will be existing legislation.

Another example of existing legislation being used is deceptive practices and consumer protection. In the United States, HireVue, a big company that helps employers hire applicants, had a complaint filed against it by the Electronic Privacy Information Center (EPIC), stating that its systems weren't working as advertised: the company was claiming to be perfectly objective, and EPIC claimed that this was actually not true, that the systems were biased. As the complaint went to the FTC, HireVue decided to change its algorithms to address it. So one way that some of the ethical issues can be addressed is simply making sure that advertising statements such as "we're perfectly objective, we're more objective than human employers" are actually being met by companies.

On data protection, Europe and the United States have extensive legislation in place. In Europe, Article 22 of the GDPR already specifies that no employment decisions should be made in an entirely automated fashion. The courts are still working out to what extent that holds right now, but there are also a number of rights enshrined in the GDPR that are relevant for the data that AI collects, whether it's the right to transparent information, the right to access the data, or the right to rectification; all of these concern worker data.

Finally, legal rights to due process are being used to challenge the use of AI systems in the workplace, especially uses relating to decision-making processes. For example, in the case of algorithms used to assess school teachers in the US, a lawsuit was brought in Houston arguing that, because the functioning of the algorithm was undisclosed, it could no longer be used to assess teachers, and that teachers have a right to understand how decisions about their employment are made.

Moving now to society-wide policy. This is a big category: there are a lot of proposals coming out right now, a lot of countries have put forth policy principles and AI strategies, and a number of countries are working on technical standards. The United States, for example, has NIST, which has been tasked by Congress with developing standards for AI use that will be ethical across society. These are evidently relevant for the workplace as well, notably those that relate to discrimination and trying to prevent discrimination by AI systems.

The EU has also proposed the AI Act, which is probably the most famous society-wide proposal currently under discussion. The proposed categorisation ranges from "minimal" to "unacceptable" risk for different AI systems. Generally, all AI systems in the workplace will be considered high risk, which will lead to requirements such as data protection, transparency and human oversight, to mention just a few. We are also looking at the US Algorithmic Accountability Act, which was introduced in 2019 and has since had some amendments. There has not been a lot of momentum on this, unfortunately, but there are other state-level proposals, many of which adopt a similar approach to the EU framework, classifying AI systems by risk level and then attaching requirements or banning them depending on the level of risk. In one of the amended proposals to the Algorithmic Accountability Act, all employment AI systems were categorised at a pretty high level of risk.

Finally, there are policies specific to the ethical use of AI systems in the workplace. These are cropping up across the board, and the four coloured circles on the left highlight what we've seen emerging as policy priorities that really focus on AI in the workplace: discrimination, explainability, ensuring informed consent, and accountability.

In the United States, as is often the case, a lot of experimentation is happening at the state level. A number of states and even some cities have proposed legislation that focuses specifically on hiring and recruitment and the use of AI in those processes. The State of Illinois' Artificial Intelligence Video Interview Act requires employers to inform candidates about the use of AI systems in video interviews for recruitment. This is partly to address the gap in existing discrimination legislation in the United States.

In New York City, a council ban will come into effect on automated employment decision tools used without annual bias audits. It has come under some criticism over some of the terms being used, but it's starting to generate a conversation about whether or not these kinds of bias audits should be required of AI systems.

In Spain, legislation linked to a collective bargaining agreement makes it mandatory for platforms to give workers' representatives information about the algorithmic formula used to determine working conditions. This was originally the result of a court decision that mandated discussions between social partners about platform workers, but it has since expanded into more general legislation that has come into effect.

And finally, in the state of Maryland, legislation restricts the use of facial recognition in interviews for employment without consent. Facial recognition has emerged as a key area of concern, particularly following 2020 and the Black Lives Matter movement, when this issue came even more to the fore with evidence of specific cases that were really quite stunning in terms of their racist outcomes. So facial recognition in a lot of contexts must be improved, and it's being considered as something that should be banned.

There are questions about these policies. For example, in Illinois, the law is unclear on what kind of explanation needs to be given to candidates about the AI, or what happens if a candidate refuses to be analysed in this way. And as I mentioned, New York City's measure has already received some criticism, notably over the possibility that vendor-sponsored audits will just rubber-stamp the vendors' own technology that gets used for employment decisions.

I think that's probably the summary of the policy responses that we've seen so far. I think a lot more is going to happen going forward, because these proposals have started generating real conversation in most places, and they've motivated other conversations elsewhere. I remember the Australian Commissioner speaking about how the EU AI Act was generating conversation about whether Australia should adopt something similar, and we at the OECD are happy to be a place where the different countries come to discuss what they're proposing.

Date

September 14, 2022, 10:00 to 11:00

Location

Zoom

