
Understanding AI at work

This toolkit is for employers and workers seeking to understand the challenges and opportunities of using algorithmic systems that make or inform decisions about workers.

Introduction

Developments in technology mean that computer systems can now make decisions about work and determine where, when and how people or machines work. ‘Data driven technologies’, or ‘algorithmic systems’, are increasingly shaping access to work, its terms and conditions, and its quality.

This resource is designed to help employers and workers understand how human choices in the design, development and deployment of algorithmic systems shape impacts on work. There are videos to help explain concepts, and a glossary of the technical terms used.

Algorithmic Systems at Work 

Businesses are increasingly adopting data driven technologies and using algorithmic systems to inform key decisions about work. Algorithmic systems that process workers’ personal data and could affect access to work, its conditions or its quality present high levels of risk.

There are three main uses of AI in the workplace:

Job Advertisement 

Algorithms can be used to decide who should be shown new jobs. Much like social media platforms that decide which posts are most relevant for each user, job advertisement algorithms decide which jobs should be shown to which people on the basis of how likely they are to click on, apply for and get these jobs. This can benefit both employers and workers as applicants are shown advertisements that are most personally relevant.

Hiring 

In hiring, algorithmic systems can be used to help businesses review, select, recommend or automatically reject candidates. Algorithms can be used to screen CVs, analyse video interviews or process the results of psychometric tests. Given large candidate pools, limited time and resources, and a lack of capacity or ability to analyse trends in successful candidates and workers, businesses turn to algorithms for the heavy lifting.

Management 

In the workplace, algorithmic systems can be used to inform and make decisions about workers through the collection and analysis of data at work. Algorithms can play a role in supervision, allocation of work, promotion and reward, disciplinary action, dismissal of workers, or other decisions that a manager might otherwise make. This ranges from recommendations issued to managers, which they can choose to act on, to fully automated processes implemented without a manager.

Whenever personal data is processed, data protection law will apply. If you use personal data for any AI application, you should refer to ICO guidance on AI and data protection.

Good Work Algorithmic Impact Assessments

To ensure that algorithmic systems are deployed in a way that promotes ‘good work’, organisations should review the impacts that these technologies might have, as well as their compliance with data protection and other legal requirements. The principle of good work is explained fully below, but it is more than employment. It is work that promotes dignity, autonomy and equality; work that has fair pay and conditions; work where people are properly supported to develop their talents and have a sense of community.

Specific risks arising from the deployment of AI at work also invite the application of a Good Work Impact Assessment, for which you should refer to IFOW’s Good Work Algorithmic Impact Assessment Guidance. This toolkit complements the guidance by exploring practical AI applications in the workplace and the risks they present. It does not provide legal advice.

Responsible deployment of AI can support good work, but care is needed to achieve this. Through practical examples, videos and case studies, this toolkit aims to help employers and workers understand how to get this right. It does not provide advice on the law; however, you can find the ICO’s guidance on compliance with data protection law here, and IFOW’s toolkit on legal rights and responsibilities applicable to good work here.

This document contains information to better understand how trade-offs made in the design, development and deployment of algorithmic systems at work can impact Good Work principles, which are underpinned by law. It is not designed to help you assess the legality of AI at work, which is the responsibility of employers. UK GDPR is a central legal regime to consider, not least because data protection can provide a gateway to the preservation of other fundamental rights.

Outline of this guidance

This guidance begins by explaining why humans are at the centre of the design and development of data driven technologies like AI.

It then outlines how and why human decisions are vital in the deployment of data driven technologies.

We then have a spotlight on the impacts of data driven technologies on equality, and on the wider risks to good work.

The conclusion suggests some ways forward for employers and workers to put this guidance into practice.

Because these technologies are innovative and complex, some of the language used in this guidance is quite technical. For this reason, we have provided a glossary of terms and some explanation of key concepts. If you are not familiar with ideas in data science, you may want to read this first.

You can find the ICO’s guidance on compliance with the UK GDPR (the primary piece of data protection law) here. For further information on AI in particular, see ICO Guidance on AI and data protection.

You can find IFOW’s toolkit on legal rights and responsibilities applicable to good work here.


Human decisions in design and development of data driven technology

Work, models for work and society are increasingly shaped by data driven technologies. While hardware such as cameras, computer chips and robotic arms provides the infrastructure to carry out tasks, software such as machine learning algorithms provides the computational power, making decisions about work and determining where, when and how people or machines carry out these tasks.

The problems that algorithmic systems are meant to solve are often complex social ones with no single right answer. However, all algorithmic systems are designed, developed and deployed by humans. Humans set the rules of processing, determine objectives and choose the datasets on which systems are trained. It is important to recognise and regulate the human choices that shape technology and determine its outcomes. This is commonly known as a ‘sociotechnical’ approach.

The process of designing an algorithmic system is demonstrated below, through a case study of an algorithmic management tool introduced to improve the productivity of workers at an imaginary clothing company called Brill. Interspersed through this case study is a discussion of the key human decision-making points over the lifecycle of an algorithmic system.

Some types of processing activity referenced in the case study are likely to result in non-compliance with data protection obligations, unless an employer takes steps to ensure that a data subject’s rights are protected. If you are reading this as an employer, it is critical to ensure you are compliant with UK data protection law, in addition to other regimes relating to good work.

1. Setting the agenda

First, humans make decisions about how and when to use AI. Money, time, labour and resources are channelled to design, develop and/or procure the right AI solutions for the job.

  • Brill, a retail clothing company, is looking for ways to improve productivity.
  • Management at Brill knows that the company’s competitors use algorithmic management systems.
  • Without reviewing the evidence of productivity gains of these tools, relative to other strategies of work transformation, Brill decides to invest in a new algorithmic management system.

2. Problem formulation and outcome definition 

Humans choose how to define the problem that needs to be solved and what success looks like. This serves as the justification for using AI.

Developers then choose desirable outcomes, or target variables, which algorithms can measure. For example, an algorithmic system could be told to measure worker performance as the target variable, and then use customer feedback data as a means of measuring that outcome.

  • To increase the productivity of workers, Brill decides to measure what good performance means in the company, allowing it to predict who will be good performers in the future and to inform promotion and dismissal decisions. This is seen as a strategy to improve productivity.
  • To do this, it has to work out what can represent ‘good’ performance within the system. As the system is data driven, this has to be something quantitative (for which there is numerical data, or for which numerical data can be collected).
  • Brill has historic data on worker gender, age, sales, history of pay rises and customer ratings. If it wants to train the system using experience from its own business, it will need to choose some combination of variables to determine what the system learns from and optimises for.

3. Data collection and use 

Like people, in order to learn, algorithmic systems need data about the world. Based on how the problem and the target variable have been defined, those designing and developing algorithmic systems make choices about data: which datasets to include, where, when and from whom to collect data, and how to process and label it.

A key data protection principle is that input data should be checked for errors and should measure reality accurately. It is important that both the systems and the associated legal obligations are reviewed carefully. Labelling and ‘cleaning’ the data allows developers to give appropriate labels to data points and to check that the data does not contain missing values or errors.

However, these procedures involve judgements on which variables are valid and useful for a decision-making process.

Businesses make decisions about the kinds of data used to train algorithms. These choices make important differences in workers' lives. Choosing the right data for training is an important step in the design process that has downstream impacts on how accurate, reliable and useful the algorithmic system will ultimately be. Businesses must also have a valid lawful basis under UK GDPR for the data processing they carry out and the type of processing involved. For more, see ICO guidance on lawful processing.
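For readers who want to see what checking and cleaning data can look like in practice, here is a minimal, hypothetical sketch using the pandas library. The column names, values and thresholds are assumptions made for illustration; a real workforce dataset and its quality checks would be far more extensive.

# Minimal, illustrative data-quality check (hypothetical columns and thresholds).
import pandas as pd

# Example worker records; in practice this would be loaded from an HR system.
records = pd.DataFrame({
    "worker_id": [101, 102, 103, 104],
    "sales_revenue": [5400.0, None, 6100.0, 250000.0],   # one missing, one implausibly large
    "customer_rating": [4.2, 3.9, None, 4.8],            # ratings expected on a 1-5 scale
})

# 1. Report missing values so they can be investigated rather than silently dropped.
print("Missing values per column:")
print(records.isna().sum())

# 2. Flag values outside plausible ranges (thresholds are assumptions for this example);
#    missing ratings are surfaced for review as well.
implausible_sales = records["sales_revenue"] > 100_000
out_of_range_rating = ~records["customer_rating"].between(1, 5)
print("Rows needing review:")
print(records[implausible_sales | out_of_range_rating])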

  • Brill gives the management algorithm a large amount of the data it holds on workers, such as their gender, age, history of pay rises, customer ratings and data from wearable wristbands on the number of steps taken each day.
  • Some of this data is directly related to performance, such as sales, while some is not, such as workers' gender or home addresses, as saved in the HR system.
  • Brill wants to learn about patterns of worker behaviour to find out which kinds of workers seem to be good performers, even if the management doesn't understand what causes good performance.
  • The algorithm learns, through discovered patterns in the dataset, that historically, workers who live in certain postcodes, or are recorded by Fitbit to have walked fewer steps each day, are the most likely to receive pay rises. In this way, the algorithm learns that characteristics which a person might not identify as important variables in predicting success seem to be correlated with the desired outcome. These can, however, be proxies for characteristics that indicate success.
  • Brill is happy with how the management algorithm has performed so far in predicting worker performance. The management decides to feed the system more data from a more diverse range of sources so that its predictive power increases.
  • Brill installs the algorithmic management system onto the mobile phones of workers as an app. As the app is used to clock in and out of work, this allows more data to be collected on when workers are at work and for how many hours. The app also allows for continuous data collection about how workers behave at work, giving the algorithm more personal information. It tracks workers’ movements at all times, to be used as a proxy for whether the workers regularly exercise and stay healthy.

4. Variable selection and engineering 

As the algorithm learns, the developer can fine-tune it by altering its parameters and changing the relative importance, or weightings, given to different variables. This process is not always straightforward and the developer may need to make many changes. For example, home postcode may be identified as a poor variable for predicting performance. As a result, the developer might discount or reduce the weighting for this variable. Such choices are critical to how the system later performs and should be recorded so that those using it know how it works.

  • Brill's designers decide to remove ‘postcode’ as a variable within the model of performance. They set the weighting of ‘sales revenue’ to be twice as significant in making recommendations as ‘steps completed’ (illustrated in the sketch below).
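The short sketch below illustrates, in simplified form, how weightings like these might be applied in a scoring model. The variable names, normalisation constants and weights are assumptions for the purpose of the example, not a description of any real system.

# Hypothetical weighted scoring model reflecting the choices described above:
# 'postcode' has been removed, and sales revenue counts twice as much as steps.

def performance_score(sales_revenue: float, steps_completed: float) -> float:
    """Combine normalised variables into a single score using fixed weights."""
    # Illustrative normalisation constants (assumed typical maximum values).
    sales_norm = sales_revenue / 10_000
    steps_norm = steps_completed / 12_000

    weights = {"sales_revenue": 2.0, "steps_completed": 1.0}
    return weights["sales_revenue"] * sales_norm + weights["steps_completed"] * steps_norm

# Example: compare two workers under these weightings.
print(performance_score(sales_revenue=8_000, steps_completed=4_000))   # ~1.93
print(performance_score(sales_revenue=5_000, steps_completed=11_000))  # ~1.92

Even in this toy example, small changes to the weights or to the normalisation constants change which worker appears to perform better, which is why such choices should be recorded.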

Human decisions in deployment of data driven technology

1. Implementation 

Once an algorithmic system has been designed and developed, choices also need to be made about how the algorithmic model is deployed. In other words, how is it implemented into the broader decision-making system of a company? This will require careful consideration of questions like: 

- How will workers and managers use predictions that an algorithmic system makes?

- When should people follow algorithmic recommendations?

- What should be done if people disagree with the model's recommendation?

- What kind of rules should be in place to decide when to follow algorithmic recommendations, and when to follow human ones?

- Do those using these systems understand which variables are informing recommendations and how?

- Can workers or other people affected by algorithmic decisions challenge these decisions and/or provide feedback?

Algorithmic systems typically involve multiple parties with varying degrees of responsibility, such as the developer, the user, the operator and the stakeholder. The developer is responsible for the technical design of the system and is typically a contracted third party, unless the system is developed in-house. The user is the organisation which typically procures the system for an intended use context, being the employer in the context of the workplace. The operator is the person who uses the system on the ground, such as a hiring manager. Finally, the stakeholder refers to all those affected by the system, such as contracted workers and potential hires.

Decisions supported by algorithmic systems may be automated or semi-automated, for which different considerations apply. Once these decisions are made, the rules and insights are passed down to the operators of the algorithmic systems, who often are not the same as the designers of the system or those who make the overall decisions about the procurement and ultimate purpose of the system. Operators need to be trained and prepared on how to make sense of algorithmic recommendations, how to monitor them for adverse outcomes and how - or whether - to make decisions on these bases.

  • Upper management at Brill decides that, although the algorithmic system has some limitations, it is more efficient than the old model of management and saves on human resource costs.
  • To cope with potential problems caused by the system and how it may impact the accuracy or fairness of management decisions, Brill instructs workers to raise any complaints with human managers if they come across suspected problems, but otherwise to follow the recommendations of the algorithm at all times.

2. Communication of the model’s predictions and limitations 

How a system's outputs are presented and communicated will impact how people use it. Some systems might display risk scores, such as showing colours or percentages to inform human decisions. Users must understand the limitations of the model and should be able to clearly explain algorithmic predictions and decision-making, particularly to those impacted by the system. Here, legal obligations, including those under UK GDPR, should be seen as the bare minimum, given the growing consensus about the importance of sharing as much meaningful information as possible about the purpose, remit, nature and impacts of an algorithmic system used at work.

3. Making runtime adjustments 

As the algorithmic system is used, different problems may emerge over time for different groups of people. These impacts and potential problems should be monitored so that adjustments to the model can be made when needed. Developers, users, impacted groups and key decision-makers should all have the opportunity and responsibility to feed into this process of continuous monitoring and improvement. A proposed approach for governing algorithmic systems at work can be found in this guidance.

Ideally, systems undergo tests, or ‘audits’, before they are deployed. These pre-tests should inform changes to system design to ensure and promote fairness before problems arise.

Technical audits assess the performance of a system across a variety of dimensions, including accuracy, robustness, privacy, explainability, fairness and bias. The most common of these dimensions are listed below. Employers should create and keep reports on the quality of a system according to these principles.

- Accuracy: the ability of a model to predict outcomes correctly on the basis of given datasets.

- Robustness: the extent to which a system continues to perform well despite changes in the data it processes, or changes in the real-world environment that could limit its effectiveness.

- Explainability: the extent to which a model can be explained in human terms and understood by humans.

- Interpretability: the extent to which a system is designed to reveal cause and effect.

- Algorithmic fairness: the extent to which a system treats different groups and individuals in a reasonable and morally acceptable way.

There are often tensions between these dimensions as improving one element can lead to the worsening of another. Developers thus have to make hard choices about what they prioritise. These decisions can impact equality in significant ways. For example, increasing accuracy might have a negative impact on algorithmic fairness.

  • To understand the impact of the new system on different workers in the warehouse, Brill compares the decisions that are made by the algorithm about different groups. When comparing the outcomes given to men and those to women, Brill discovers that the algorithm treats men and women differently. The algorithm is more likely to offer men a pay rise.
  • The developer then excludes all variables which represent gender, explaining to the management at Brill that the system has been 'blinded' to names and recorded gender in the system. It is hoped that this procedure will introduce 'fairness through unawareness' — an approach to fairness that makes systems ignorant of personal demographic information about people, usually their characteristics which are protected in the law, such as race or gender.
  • However, the system still disproportionately recommends men for pay rises. The developer investigates and finds that, historically, male workers have made more sales. Given a dataset that has many high-achieving men and only a few high-achieving women, the algorithm is not as good at predicting female success in sales and tends to recommend men more often than women for pay rises, even when a female worker performs just as well.
  • To address this problem, the developer has to reduce the weighting given to sales revenue. They explain to the managers at Brill that, while this mitigates the issue of unfair bias in terms of gender representation, it has reduced the accuracy of the system.

This example reflects the ways in which algorithmic systems can significantly impact equality.
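For readers comfortable with code, the hypothetical sketch below mirrors the Brill scenario with synthetic data: a model trained without the gender column can still recommend men more often when another variable (here, historic sales) is correlated with gender in the training data. Everything in it, including the data and the simple logistic regression model, is an illustrative assumption.

# Synthetic demonstration that removing a protected attribute ('fairness through
# unawareness') does not necessarily remove disparity if a proxy remains.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.choice(["M", "F"], size=n)
# Assumed historical pattern: recorded sales are higher for men on average.
sales = rng.normal(loc=np.where(gender == "M", 60, 50), scale=10, size=n)
steps = rng.normal(loc=8000, scale=1500, size=n)
# Historic pay-rise decisions largely tracked sales, so they inherit the gap.
pay_rise = (sales + rng.normal(0, 5, size=n) > 58).astype(int)

data = pd.DataFrame({"sales": sales, "steps": steps, "gender": gender, "pay_rise": pay_rise})

# Train WITHOUT the gender column (the 'blinded' model).
model = LogisticRegression(max_iter=1000).fit(data[["sales", "steps"]], data["pay_rise"])
data["recommended"] = model.predict(data[["sales", "steps"]])

# Compare recommendation rates by gender: the gap persists via the sales proxy.
print(data.groupby("gender")["recommended"].mean())

This is why, in the case study above, the developer ends up adjusting the weighting given to sales revenue rather than relying on 'fairness through unawareness' alone.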

Impacts on equality

Equality is a leading concern in the design of algorithmic systems. Most commonly, but not always, equality concerns refer to errors which drive different outcomes between groups. This section explores some common equality issues that can arise from workplace applications of algorithmic systems, with a focus on the technical, rather than legal, issues. However, each example invites careful attention to the application of the Equality Act and how it intersects with data protection laws. For a full analysis of this, please see this report.

Biases can arise in algorithmic systems in many ways. A few of the most common types are:

  1. Historical bias
  2. Sampling bias
  3. Algorithmic-design bias
  4. Human-oversight bias
  5. Deployment bias

Historical bias occurs when the algorithm learns from, and reflects, previously biased human decision-making (for instance, managers in an architectural practice only hiring white applicants) or because of broader societal and environmental processes which lead to different outcomes for different groups (for instance, primarily white individuals applying to architecture roles because of aggregate individual preferences, cultural norms or differential opportunities).1

  • Example: Amazon once used an algorithm for the hiring of software engineers. As this algorithm was trained on the CVs of existing workers, who were largely male, it learned to give lower scores to female candidates whose CVs included terms such as "women's college", as it had not been trained on sufficient numbers of examples of successful women.

Historical bias is one of the most significant sources of bias in algorithmic systems which learn from real-world social data. Because history is characterised by inequality, systems which learn from historic data to inform future decision-making can reproduce the inequalities of the past and project them into the future.

The Big Feedback Loop

Sampling bias occurs when the algorithm has been fed data that does not represent the population accurately. This form of bias is more of a technical problem than historical bias and thus can be remedied more readily, such as with more representative datasets.

  • Example: Facial recognition technologies can suffer from sampling biases when they are trained on datasets that include more images of people from one racial group, making them better at recognising and classifying people from that group than from others.

Both historical and sampling bias stem from problems with the training data upon which the algorithm was trained. However, problems can also come from other sources, leading to algorithmic design bias. Design bias happens when the programmer intentionally or unintentionally includes bias in the model, such as by defining variables or labelling data in a skewed way, or by having goals which are in themselves biased or unfair.

This is related to the phenomenon of human oversight bias, occurring when the final decision that is made by a human, after consultation with the algorithm, is influenced by the human's conscious or unconscious bias.

Finally, deployment bias occurs when the way in which the algorithm is used and implemented leads to bias. For instance, if the algorithm is embedded into a user interface that is not accessible to some users, then there will be bias in terms of who can make use of and enjoy the benefits of this algorithm.

Bias audits evaluate whether an algorithm has had a harmful impact on different demographic groups or treats them in different ways that could lead to harmful impacts. The outcomes of technical bias audits should be recorded and disclosed to those who use these systems. 
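At its simplest, such an audit compares outcome rates across groups. The sketch below, using made-up decision records, computes the rate at which each group receives a positive recommendation and the ratio between the lowest and highest rates; a ratio well below one is a prompt for further investigation, not proof of unlawful discrimination.

# Minimal outcome-rate comparison for a bias audit (illustrative data only).
from collections import defaultdict

# Each record: (demographic group, whether the algorithm recommended a pay rise).
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, recommended in decisions:
    totals[group] += 1
    positives[group] += int(recommended)

rates = {group: positives[group] / totals[group] for group in totals}
print("Positive-outcome rate per group:", rates)

# Ratio of the lowest to the highest rate; values well below 1 warrant investigation.
disparity_ratio = min(rates.values()) / max(rates.values())
print("Disparity ratio:", round(disparity_ratio, 2))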

However, in practice, these processes often involve trade-offs and consideration must then be given to which trade-offs have been made and on what basis. Because of these inherent trade-offs, technical audits alone are not sufficient to prevent equality harms arising from algorithmic systems. For this reason, a wider assessment of the impact of a system on good work needs to be completed.

(Please see IFOW's Mind the Gap  report for further discussion about equality impacts, including those covered by the UK’s Equality Act.)

1CDEI bias review, available here.

Wider impacts on Good Work principles

As noted above, the design and deployment of data driven technology can impact equality, a key dimension of good work. However, there is a range of possible ways in which the design and implementation of data driven technology can impact other aspects of job quality. These impacts are often experienced differently by different groups. They are also less suited to established methods of system audit.

Employers looking to be responsible while introducing technology will seek to mitigate any risks of harm and promote good work, over and above legal obligations. For further information, please see IFOW’s Guidance on the Good Work Algorithmic Impact Assessment.

A key mechanism by which work changes when algorithmic systems are used is via the ‘human data cycle’.

Representation (Data gathering)

Data gathering technologies collect data to measure performance. These variables come to represent what counts as work. For instance, the number of transactions completed on a till, the number of emails sent in a day, or the number of deliveries made.

Standard Setting (Direction)

These measurements of work and performance can be used to evaluate performance and predict and schedule tasks.

Intervention (Behaviour change)

The system can shape behaviour through positive or negative reinforcement. Positive reinforcement may be rewards such as increased pay, access to work, or other rewards such as status. Negative reinforcement may be removed or restricted access to work, reduced pay or other forms of disciplinary action.

This section explores both positive impacts and negative risks by reference to the Good Work Charter.

Please note: legal requirements are not explored here, although each principle requires strict compliance with data protection, equality, labour and other laws. For these, please see national and international laws identified in the Good Work Charter. Legal advice will be required to establish correct application of the law in each case.

Access

Algorithmic systems used in recruitment and hiring can automate whom jobs are advertised to and who is hired.  

This can impact people's access to work in various ways. 

Algorithmic systems have the potential to broaden access to work, increasing social mobility and entry to the labour market. However, using algorithmic systems to predict which job adverts are most relevant to different people can entrench stereotypes about groups and exclude people from seeing jobs they might otherwise apply for. As job advertisements can be shown to thousands, if not hundreds of thousands, of applicants, this problem could be magnified many times over.

  • Annie has just left school and is looking for a job. She is not sure what she wants to do and starts browsing online. She finds a few adverts that she likes through a social media platform and applies.
  • Unbeknownst to Annie, an algorithmic system is operating on the platform that has analysed Annie's personal characteristics and her past behaviour on the platform, comparing it to the data it holds on millions of other users. The system has built up a profile of who it believes Annie to be and which adverts it thinks she is likely to click on.
  • The algorithm decides to show Annie a selection of jobs such as care home worker or receptionist. At the same time, Annie's brother, Max, is also looking for jobs online. He is primarily shown roles in bricklaying and driving. The job adverts shown to Max, on average, offer higher salaries than those shown to Annie.

Problems can also arise in the hiring process. A biased algorithmic system might reject candidates who could have been a good fit for the job or nominate candidates who are not.

  • A car company is looking to hire mechanical engineers. The company uses an automated hiring tool which evaluates which characteristics currently make for ‘good’ workers. It looks at existing workers who have been promoted and makes recommendations for whom to hire on this basis.
  • Most of the workers obtained their degrees from Russell Group universities.
  • Because the algorithm only has access to the existing pool of workers, it concludes that good workers are generally those who have received degrees from elite institutions. The algorithm downgrades CVs from those educated at non-elite universities and recommends to the hiring manager only those who have graduated from Russell Group universities, even when other candidates have similar work experience.

Access to opportunities within work can also be determined by algorithms for existing workers. Algorithmic systems can be used to measure, monitor and track working patterns to predict labour demand, such as by allocating shifts to workers. Many supermarkets use historic annual sales data to project the number of staff they need in store at any time. When combined with zero-hours contracts, algorithmic systems allow firms to manage labour flexibly. However, algorithmic systems may not have the capacity to make contextual judgements about performance and work allocation, leading to unfair distributions of working hours and income, or of access to opportunities for promotion.

Please note: each of these examples invites particular attention to the application of the Equality Act and data protection principles and rights.

  • Tracy works at a supermarket. The algorithmic system offers more shifts to zero-hours contract workers who accept shifts at short notice when other workers cancel or decline.
  • However, Tracy has childcare responsibilities, a responsibility disproportionately held by women. Due to uncertainties about who she can get to look after the children when she is working, it is difficult for her to accept shifts at short notice.
  • As Tracy has no understanding of how the system works, or of how the amount of work she is offered compares with that offered to other workers, she struggles to identify this as an issue.
  • Ian works for a platform providing taxi services. The app checks his identity at regular intervals to prevent any potential fraud.
  • To do so, Ian has to take a picture of his face. The app uses a facial recognition system to verify his picture against his ID picture.
  • The app incorrectly flags him as a fraudster and bans him from using the app.
  • He cannot challenge the system. This restricts his access to work without affording him any opportunity to contest the decision.

Fair pay

Algorithmic systems increasingly determine pay at work. This can be through changes to access, as described above.

Pay can also be impacted through dynamic pricing, where algorithms incorporate different factors (such as demand and supply) to determine how much different kinds of work are deemed to be worth. While this is not commonplace at present, workers in parts of the gig economy see changes in the rates offered for their work on the basis of factors which are not transparent to them. This practice could transfer to established workplaces using algorithmic management.

Alternatively, some systems can reshape the way rewards for work are given. This may involve workers who complete more tasks being given automated rewards (such as Amazon vouchers, or higher pay). The variables used to calculate this will not always be known to workers.

  • Sue works as a freelance designer on a crowd-work platform. While she is able to have some control over her hourly rate, the platform gives a recommendation on what she should charge in order to stay competitive among the other designers. This estimate is based on real-time data about her clients’ reviews, current demand and competition. The platform does not take her work location into consideration, meaning that she is competing with other freelancers in areas with a lower cost of living. This presents a race to the bottom. Sue is compelled to use the suggested rate as she is worried she will lose out on work if she charges more.
  • Deniz is an actor who did a voiceover for a commercial a few months ago. His client has now used this recording as training data for a generative AI model to automatically generate text-to-speech voiceovers. Deniz did not give his consent for his voice to be used for this purpose and does not get paid for this additional use of the recording.
  • Mary is a single mother with a full-time job as an accountant. She has noticed she is paid less than her male colleagues when bonuses and gifts are added up. It turns out the wage algorithm uses a proxy for "worker dedication" to calculate bonuses. As Mary is a single mother, she has received a low score on dedication as the system predicts she can't work overtime as often as her male or childless colleagues.

Fair conditions

Everyone should have fair working conditions, set out on terms which are clear, transparent and mutually understood. However, algorithmic systems often introduce and exacerbate information asymmetries, as individuals do not know which variables are being used to evaluate their performance, determine their pay or access, or lead to disciplinary action. These case studies invite particular attention to employment law and UK GDPR.

  • A major platform introduces new ‘fraudulent activity’ algorithms to detect behaviour deemed fraudulent and uses this to dismiss workers from the platform. However, the variables which determine whether a driver is deemed to be fraudulent are not disclosed to workers.
  • Habi is dismissed from the platform, receiving a notification that their activity has been found to be fraudulent. The platform does not disclose what aspect of their behaviour was identified as fraudulent activity.

Furthermore, the introduction of algorithmic systems can enable firms to predict more accurately the amount of labour they need at different times. Some businesses, particularly in retail, have changed contract types, relying more on zero-hours contracts and agency workers. This reduces spending on labour, but also reduces employment protections.

Dignity

Work should promote dignity. However, greater digital surveillance of work can reduce workers' sense that they are trusted to conduct their work, as human recognition of their contribution decreases. The use of AI-supported digital surveillance can intrude on privacy and objectify or instrumentalise workers, leaving them with a sense that their basic dignity as human beings has been violated. These case studies invite particular attention to the application of health and safety law and protections under UK GDPR.

  • Isaac works as a delivery driver.
  • Over the past year, the company that he works for has implemented a new Delivery Excellence System that collects data on Isaac's performance and assigns him a route each day.
  • Isaac's phone is tracked so the system knows how fast he is driving and how well he is keeping to the set route. The van has built-in sensors that detect the quality of his driving, such as if he takes a different route, changes speed or brakes too suddenly. In doing so, the sensors pick up personal information about Isaac, such as his eye movement and head pose, even though this is not relevant to assessing his performance.
  • The system allocates Isaac a set number of deliveries per day on a particular route, to be completed at an optimal speed. If he becomes delayed, he must make up the time by driving faster between deliveries to meet this target.
  • Isaac is worried about meeting his targets. He expresses concerns to his supervisor, Christina, who responds that the system has been purchased by upper management and that she does not have the seniority or know-how to change the system. She advises Isaac to take on fewer shifts if he is feeling overworked.
  • But Isaac cannot take on fewer shifts as he needs to make enough to support his family. To ensure he can make his deliveries, drive at the right speed, and not stop at the wrong places, he begins to take toilet breaks and eat his meals in the car instead of stopping. He feels he is not recognised as a human being by his employer.

Autonomy

Work should promote autonomy. The key dimensions of autonomy are the ability to exercise choice and control over how work is done, as well as when and where work is done. Algorithmic systems can impact all of these aspects of work by predictively scheduling tasks or shifts, and by increasingly delimiting the decision-making required from workers. If not thoughtfully designed and used, these systems could enable increased control over people’s work and surroundings, reducing their sense of autonomy.

These examples illustrate how algorithmic systems can undermine people’s autonomy in the workplace. 

  • Sam, a picker in a warehouse, wears a wristband which responds to data from other sensors about where items are on each shelf. Rather than relying on Sam’s eyesight and familiarity with the warehouse to identify where products are, the wristband vibrates when he is close to a product. Sam is not told about the overall workflow so he cannot use this information to make better choices about his own pattern of work.
  • A manufacturing firm introduces an app to allow managers to check what staff are achieving remotely during COVID-19. The platform can set “required tasks” at the beginning of each day. Rather than trusting workers to wash their hands, a procedure is established setting out that workers should submit a dated photograph of themselves washing their hands when they arrive each morning. When COVID-19 rules are relaxed, the procedure remains in place.
  • Nazmul works in logistics as a picker in a warehouse. He needs to hit an hourly target of a certain number of parcels. As part of that, his workplace has deployed a number of information-gathering technologies to monitor performance. He is told that his employer is monitoring the system but, in practice, this means monitoring how fast Nazmul is working. When he is seen chatting with colleagues or is inactive, the system records this. If he exceeds a certain number of off-task minutes per day, he gets a warning that could eventually lead to dismissal.

Wellbeing

Work should promote physical and mental wellbeing. Automation has long been expected to save us from “dull, dirty and dangerous” work. However, algorithmic systems can impact wellbeing in various ways. The outsourcing of management to algorithms can improve efficiency, but negative outcomes must be assessed too. These may be harmful to workers and unsustainable for employers in the longer term. In particular, since algorithmic systems do not have a contextual awareness of the needs and capacities of each individual, workers can become stressed or overworked, with less ability to communicate with their managers.

For instance, the human data cycle can be used to schedule tasks in ways that increase the task density of the working day. This can lead to mental and physical fatigue. It can also lead to injury and reduce the social aspects of work, changing work as a space for connection. These examples invite particular attention to health and safety law.

  • Mandy is a supermarket worker. She enjoys talking with her older customers, who seem lonely and often try to initiate conversation. This slows her ‘scanning’ rate for items on the till, although repeat custom at the store is increasing. The app which collates and reports on her performance, including Mandy’s scanning speed, flags her for a performance meeting. She is missing her targets, taking too long between her first and last scan. The aspects of her work that she thinks contribute value to the business and local community are not included in the optimisation criteria.
  • Ayo works as a consultant in a professional services firm. Her company uses an AI-driven system to help her track her time and her activities. It generates a timesheet that identifies her billable hours that can be charged to different clients. The main metric the system produces is the utilisation rate. In order to perform to the expected standards, Ayo has to hit her daily target of six billable hours out of her eight formally contracted working hours.
  • Because of the pressures of meeting these targets and the automatic time tracking system running at all times, Ayo can either minimise going to the bathroom during work hours, as this would not be counted towards her daily time target, or she can pause the system and subsequently have to work longer and later to make up this time. Over time, she begins to suffer stress. Nobody in the firm is aware of this risk or asks her for her views before or after introducing the system.

Support and participation

Algorithmic systems can help workers communicate better with each other, with management and with union representatives, improving morale, building a sense of community at work and supporting information sharing, which can improve productivity. However, they can also have negative impacts on support within the workplace, and on protecting and promoting processes, such as unionisation, that allow workers to contribute towards improving working conditions. These case studies invite particular attention to protections under UK GDPR and to labour and trade union law.

  • Imani works at a company which introduced a new function for workers to socially network with each other. This is helpful because the workforce is distributed and there are no physical spaces for workers to come together and discuss work and working conditions.
  • However, the platform uses an algorithm to detect words such as ‘union’, ‘unionise’ and ‘disappointed’. Those who try to send words from this list are unable to send the message.
  • As such, workers cannot use this space to discuss workplace issues.
  • Amir works in a supermarket warehouse. To streamline processes of worker evaluation, the company introduces two new technologies as part of a new worker management system.
  • First, the company adds facial recognition to the existing security cameras installed throughout the warehouse. The system is trained to recognise the emotions of workers and to learn which kinds of workers are most efficient at moving goods. The system finds that workers that are ‘moderately happy’, but not too happy, seem to move the most goods.
  • Second, the company analyses the chat data of workers on the warehouse chat forum. Once again, the algorithm finds that workers who engage in a moderate amount of chatting seem to move the most goods. Perhaps it is because they stay in touch with fellow colleagues but do not spend too much time online.
  • Workers who are scored highly by the management system receive positive recommendations from the algorithmic management system. The managers trust the system and reward these workers with higher pay and other benefits. Workers who are given consistently low scores do not progress and are eventually dismissed if their scores do not improve.
  • Amir is autistic. While he is highly efficient at most tasks, he does not smile often or interact with his colleagues on the chat forum. To engage in this way, he needs other forms of support and representation. As a result, he finds that he is not promoted and feels increasingly isolated at work. His employer does not recognise a union and he does not have access to a union representative.

Learning

One of the ways AI technologies can be deployed is to ‘augment’ worker skills. This can mean workers are able to use more skills and become more effective at their job. Combined with high quality, relevant training, this can maximise potential. However, algorithmic systems can also be used to narrow the decision-making required from workers which can reduce their discretion. This has negative impacts on autonomy but also reduces the opportunity for people to develop and use their skills and capabilities.

Algorithmic systems can also be used to incorporate and automate aspects of worker knowledge. This can have impacts on pay, which is often taken as recognition of the skills a worker uses. The use of technology to deliver learning can offer new opportunities and broaden access, but negative impacts must be considered. For instance, dependence on algorithmic systems for learning can create barriers for some groups unless the systems are designed with awareness of all access requirements. As a general rule, increasing information and human involvement will improve positive impacts on learning and reduce risks.

  • Caleb is an experienced engineer working in manufacturing. His employer is worried about the ageing manufacturing workforce, and, in particular, about losing the knowledge of experienced workers regarding how the machinery operates. His manager procures a system to infer ‘tacit knowledge’. This collects very detailed information about how Caleb does his work:
  • Caleb inputs instructions about each stage of his work, taking videos and adding pictures.
  • Managers input methods on how to do work, which can then be tested and adapted by workers.
  • Machine learning is used to process data from this range of sources.
  • Once the method of work is encoded in the system, it can be used to train less experienced workers how to do a job. This can upskill new workers, but could equally reduce the requirement for formal education for roles, which is predictive of higher pay. In this sense, a system can both promote learning and reduce the wages or access to work of existing staff, depending on human choices about use and human resource management.
  • A university is looking to offer online degrees. As part of that, it tasks its academics with designing the curriculum, recording their lectures and developing the assessments. Instead of hiring the academics to deliver the teaching, teaching assistants on fixed-term contracts are hired to provide the individualised support, and student papers are partly graded by algorithmic systems. As a result, fewer academics are hired and work is assigned in a piecemeal fashion. This leads to reputational impacts for the university, a diminished research presence and fewer awards of research funding.

Conclusions

Data driven technologies are increasingly finding their way into the workplace. This presents opportunities for companies and workers but also presents risks. A poorly designed, developed or deployed algorithmic system could have significant impacts on job quality and could lead to inequalities in the workplace and in hiring.

However, with careful assessment and testing, these impacts can be minimised and the risks preempted. This will ideally lead to the opportunities of technology being maximised, and the benefits shared amongst all workers, increasing the amount of good work available.

Suggested actions

  1. Watch the explainer videos
  2. Read the glossary to make sure that you understand what is meant by each of the technical terms used
  3. If you are a company about to adopt an algorithmic system, or a worker in a company that has done or is about to do so, initiate a Good Work Algorithmic Impact Assessment.
  4. If you are a business that would like to work with us at IFOW to pilot GWAIAs in your workplace, please do get in touch via team@ifow.org

Data driven technology - glossary and key concepts

Work, models for work, and society are increasingly shaped by data driven technologies. Developments in both hardware and software have allowed technologies to become more interconnected. While hardware such as cameras, computer chips and robotic arms provide the infrastructure to carry out tasks, software uses machine learning and algorithms to provide cognitive power, making decisions about work and determining where, when and how people or machines carry out these tasks.

Data and Datasets 

Data is information that is collected, processed or stored by, and in, digital technologies. 

Each piece of information is a data point. 

Datasets are collections of data, typically related to each other.

Data Driven Technologies  

Data driven technologies are changing the way information is processed, used and controlled in society, creating new risks and impacts in the workplace. Increasingly used in the workplace, they are also transforming access, conditions and quality of work.

There are many different types of technology that you might see in the workplace which collect and process data.  

These range from the more commonplace, such as computers (which can collect information through the camera, microphone, and keyboard), mobile phones and CCTV, to the more novel, such as wearable technologies like Fitbits or heart rate monitors.  

The same piece of data driven technology can be used in multiple ways, making working life better or worse.  

For example, a laptop camera could serve many different purposes in the workplace. It can enable video conferencing meetings which allow workers to work from home and have more flexible hours. It can also enable employers to install surveillance software on laptops without the worker's knowledge or permission, in order to remotely monitor them.  

To take another example, an online chat platform could be used by a company to allow workers to easily find co-workers' contact details and communicate with them. It could also be used by a  company to censor and ban certain forms of communication that it deems to be harmful to company interests, such as by blocking the words "union" and "pay rise".  

Technologies can collect and process information about workers with or without their knowledge,  using data that workers input themselves, such as emails or photographs, or using data that has been collected about the worker, such as through CCTV or wearable technology. 

It is important to be aware of which data gathering and processing technologies are in use in your workplace.

Algorithms  

An algorithm is a sequence of instructions programmed into a computer, designed to complete a task or solve a problem.  

‘Algorithm’ is sometimes used as shorthand for ‘Artificial Intelligence’ (AI), an umbrella term commonly conflated with machine learning but better understood as a scientific field. The term can also be used to market a range of technologies.

Artificial Intelligence (AI) 

AI can take many forms and is used in many aspects of our daily lives. For example, AI is commonly used to predict the weather, show us the most relevant websites in a search, and check if an email is spam or not.

Machine learning 

Machine learning (ML) is a branch of AI in which systems learn from collected data how to perform tasks defined by humans. Machine learning can find patterns in vast amounts of data to make recommendations and generate predictions. The combination of machine learning and large datasets can lead to the discovery of complex relationships within data that are not discoverable through human analysis.

Key Concepts in Machine Learning 

Predictions are statistical estimations of what is likely to happen in the future, based on data.  

Example of prediction: One way to predict future job performance is to look at past job performance. A job performance algorithm might look at a worker's history of pay rises, customer reviews, or the amount of revenue a worker has brought to the company, in order to predict how likely they are to perform well in the future. A job performance ML system might seek to learn which of these data points was the best indicator.  

Classification is placing people or things into groups through analysing data. This is often the basis from which predictions are made.  

Example of classification: A system aims to classify what activity a worker is doing (e.g. ‘walking’,  ‘stacking shelves’, ‘waiting in traffic’, ‘taking a break’), based on the motion sensors on their phone or wearable device. 

A model is what is saved after running a machine learning algorithm on training data; it represents the rules, numbers and any other algorithm-specific data structures required to make predictions. This is supported by ‘training’ processes (see below). These rules are usually designed to achieve specific goals as efficiently as possible.

Example of a model: A recruitment algorithm is based on a model that predicts which candidates are most likely to be good hires. The model could evaluate variables such as whether candidates have good work experience or whether their degree is relevant for the job.

Machine learning can be supervised or unsupervised.

Supervised learning models are trained on datasets that contain labelled data. "Learning" occurs in these models when numerous examples are used to train an algorithm to map input variables (often called features) onto desired outputs (also called target variables or labels). Models identify patterns in the training data to create classifications or make predictions for new, unseen data. A classic example of supervised learning is using variables such as the presence of words like "lottery" or "you won" to predict whether an email should be classified as spam or not spam.
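As a small, hypothetical illustration of supervised learning, the sketch below trains a classifier on a handful of labelled example emails using the scikit-learn library. Real spam filters are trained on far larger datasets with richer features; the emails and labels here are invented for the example.

# Tiny supervised learning example: labelled emails -> spam / not spam.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "you won the lottery claim your prize now",
    "lottery winner send your bank details",
    "meeting moved to 3pm see agenda attached",
    "please review the quarterly sales report",
]
labels = ["spam", "spam", "not spam", "not spam"]  # the labelled target variable

# The pipeline turns words into count features and fits a simple classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Predict a label for new, unseen text.
print(model.predict(["congratulations you won a free prize"]))   # likely 'spam'
print(model.predict(["agenda for tomorrow's sales meeting"]))    # likely 'not spam'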

In unsupervised machine learning, the algorithm is given an unlabelled dataset and discovers patterns on its own. For instance, it might look at previous patterns of worker behaviour in order to detect anomalous activity that might indicate fraud.
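A correspondingly small, hypothetical sketch of unsupervised learning is shown below. It uses scikit-learn's IsolationForest to flag activity records that look unusual compared with the rest, without any labels saying what counts as fraud; the numbers are synthetic and chosen only to illustrate the idea.

# Tiny unsupervised example: flag unusual activity without labelled data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Synthetic records: [logins per day, transactions per day] for typical behaviour...
normal_activity = rng.normal(loc=[5, 40], scale=[1, 5], size=(200, 2))
# ...plus a few records that do not fit the usual pattern.
unusual_activity = np.array([[30, 5], [1, 300]])
activity = np.vstack([normal_activity, unusual_activity])

detector = IsolationForest(contamination=0.02, random_state=0).fit(activity)
flags = detector.predict(activity)  # -1 marks records the model treats as anomalous

print("Records flagged as anomalous:", activity[flags == -1])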

Unless models are designed to reveal which ‘features’ they identify as important, it can be hard to know how conclusions were reached. The extent to which a system is designed to be understandable by a human, in terms of being able to predict what outputs it will produce for a given input, is a system's interpretability. Designers can choose to make systems more or less interpretable, depending on the training methods used and the task at hand. Interpretability is important if organisations want to use algorithms but remain accountable.

What is an Algorithmic System? 

Technology is designed by people to serve specific social and economic purposes. How technology is designed and used, and the environment it is used in, all determine its outcomes. Equally, technology can shape people’s behaviour and decision making. In this sense, technology is both social (shaped by people) and technical: it is ‘socio-technical’. When applied at work, technology can reflect organisational culture and practices.

An algorithmic system is a system that uses one or more algorithms designed, developed and deployed by humans operating in an institutional context.  

The properties of the software, hardware, and human decisions about how these are designed, trained and deployed at work are all potentially relevant to the fairness of their use at work.

Identifying the problem that an algorithmic system should be used to address, designing the system and making choices about how it will be deployed can all determine the impact the algorithm has on good work.

Credits

Authors

Stephanie Sheir and Gwendolin Barnard

Acknowledgements

Anna Thomas, Dr Abigail Gilbert, Professor David Leslie, Professor Reuben Binns, Professor Phoebe Moore, Kester Brewin

Supported by

ICO (grant recipient) and Trust for London
