September 1, 2022

Limiting workplace monitoring under the California Workplace Technology Accountability Act

The past decade has seen the emergence of a range of technologies applicable to the workplace, from automated employment decision systems used to evaluate job candidates, to biometrics and facial recognition used to monitor employees and make predictions about their future behaviour. The pandemic only accelerated the innovation and adoption of new technologies in the workplace, shifting the way we work towards an approach even more reliant on technology: face-to-face meetings became Zoom meetings; drinks after work became virtual wine tasting parties; and water cooler chats became virtual water cooler Slack channels.

With three times as many people working from home during the pandemic as in the years before, employers have also increased their reliance on technology to monitor employees whom they could no longer physically observe. Indeed, some employers resorted to monitoring Teams statuses, prompting some employees to find innovative ways to keep their status ‘active’ while away from their computers. Others have used software to log keystrokes, and have even recorded employees through their webcams without their knowledge.

Given the potentially highly sensitive nature of the data that can be collected through this monitoring, it is clearly something that must be done with care. To address some of these concerns, a Workplace Technology Accountability Act has been proposed in California, which seeks to limit employee monitoring to the activities and times that are objectively proven to be a business necessity.

It would give employees the right to access the data held about them by their employer, as well as the opportunity to request amendments to this data. It would also require an appropriate third party to carry out Data Protection Impact Assessments (DPIAs) of worker-information systems and Algorithmic Impact Assessments (AIAs) of automated decision systems.

In this post, I take a look at how the proposed legislation could protect the privacy of workers and promote work-life balance; how it reflects the wider movement towards assurance in the AI ethics ecosystem; and the importance of researching the impact of technology and automation in the workplace.

Protecting the privacy of workers and promoting work-life balance

While monitoring can affect both employees in the physical workplace and those working remotely, employees in the physical workplace may be more likely to expect monitoring, whether through observation by leadership, CCTV, or more technology-enhanced approaches such as the use of sensors to monitor gait or track movement.

However, for those who work from home, the boundaries between home and work life are less clear, especially if the same environment serves multiple purposes, for example a living room used as an office during working hours. Consequently, any monitoring of employees working in their homes has the potential to capture unintended data, such as footage of family members, including children. This blurring of boundaries between work and home life can also lead employees to feel ‘always on’ and to work longer hours than they would in the physical workplace, with known effects on wellbeing, job performance, and turnover intentions, impacting both employees and employers.

The Workplace Technology Accountability Act therefore sets out that employers are prohibited from processing data about employees unless the data are strictly necessary for one of the following purposes:

  • Allowing a worker to accomplish an essential job function.
  • Monitoring production processes or quality.
  • Assessing worker performance.
  • Ensuring compliance with employment, labour or other relevant laws.
  • Protecting the health, safety or security of workers.
  • Administering wages and benefits.

It may be argued that what constitutes an ‘essential job function’ is subject to interpretation. In an attempt to reduce this ambiguity, the proposed legislation requires employers to provide objective evidence of the activity’s importance to the job function, including data on the amount of time workers spend performing each function and the consequences of the function not being performed. Although subjective evidence, such as job descriptions and employer judgements, can also lend some support, it is not sufficient on its own.

It also stipulates that monitoring must take the least invasive form possible, is prohibited in private areas such as bathrooms, and must be limited to the smallest number of workers while collecting as little information as possible.

The restrictions the Act places on the monitoring of workers, particularly in their place of residence, should serve to reduce the risks associated with monitoring, though whether they do remains to be seen.

Reflecting the wider movement towards assurance in the AI ethics ecosystem

Another key requirement of the Act is for impact assessments of worker-information systems (Data Protection Impact Assessments) and automated decision systems (Algorithmic Impact Assessments). These assessments must be completed by a third party, who assesses the risks of the systems and offers mitigation strategies where appropriate. Since bias is among the risks that Algorithmic Impact Assessments should cover, the requirement echoes New York City’s mandate for ‘bias audits’ of AI-driven employment decisions.

The requirement also reflects the wider movement towards assurance in the AI ethics ecosystem, where assurance refers to standardising and operationalising principles to facilitate the more responsible use of AI, for example through governance mechanisms such as audits or impact assessments. The requirement for impact assessments therefore contributes to these high-risk systems being assured, and used in a more responsible and ethical way.

However, the Workplace Technology Accountability Act does have some pitfalls and potential conflicts with other legislation. For example, a workplace is defined as “a location within California at which or from which a worker performs work for an employer.” Under this definition, employers based in California who hire remote, out-of-state workers would be able to monitor those workers without being subject to the requirements of the legislation, which could be exploited as a loophole by bad-faith employers wanting to heavily monitor their employees.

Further, although the stipulation that workers can only request information held about themselves protects the privacy of other workers, this could be problematic for workers who regularly work alongside others and have data collected about them at an aggregated level. In its current form, the proposed legislation is not clear on whether workers will have access to this information, which could limit the control workers have over their data. Nevertheless, the Workplace Technology Accountability Act is a step in the right direction, and something that would be valuable across jurisdictions, including in the UK.

The importance of researching the impact of technology and automation in the workplace

Hybrid working appears to be here to stay: according to data from the Office for National Statistics’ Opinions and Lifestyle Survey, more than 8 in 10 workers who had to work from home during the coronavirus pandemic said, when asked in February 2022, that they planned to work in a hybrid way. Since then, the proportion of workers hybrid working has risen from 13% in early February 2022 to 24% in May 2022.

Therefore, protecting the safety and privacy of workers, particularly those working from home who may not be covered by current practices, is important. And in order to do so effectively, it will be critical that we understand the impact these technologies are having on people, as well as the potential disproportionate impact on different demographic groups and communities. This is why the findings of the Pissarides Review into the Future of Work and Wellbeing will be so important, and useful for informing the development of policy as the world of work continues to change.

As the Act is still at an early stage, we will be monitoring its progress in order to see what we can learn about the process and its effectiveness, and to see how similar regulations may be applied in other jurisdictions, including the UK.

The Institute for the Future of Work is mapping international legislation, including the California Workplace Technology Accountability Act, that is designed to ensure the responsible deployment of AI in the workplace.

The Pissarides Review into the Future of Work and Wellbeing is hosting an all-day conference on Monday 12 September at IET London: Savoy Place to explore the latest perspectives from research, policy and practice on the implications of how automation technologies are transforming work, society and the economy. Find out more and register to attend.

Airlie Hilliard is a senior researcher at Holistic AI, a startup providing algorithmic assurance and building trust around the use of AI. Twitter: @AirlieHilliard
