We are optimistic about the future of work, but believe that action must be taken to ensure that innovation and social good advance together. The toolkits provided here are designed to help businesses achieve this by building and protecting good jobs as they invest in AI technologies.
The Good Work Charter Toolkit sets out the legal and regulatory frameworks that exist to support the ten fundamental principles in the Good Work Charter that should be protected and promoted whenever new workplace technologies are introduced.
Understanding AI at Work helps employers and workers understand how human choices in the design, development and deployment of AI systems can impact these Good Work principles.
The Good Work Algorithmic Impact Assessment presents an approach for involving workers in the assessment of AI systems so that Good Work principles can be built and sustained.
AI and algorithmic systems are transforming work in profound ways. These data-driven technologies have the potential to create new, good jobs and improve access to good-quality work, but they can also have adverse impacts.
As we navigate this technological transition, businesses, system engineers and workplace representatives need practical tools to understand the risks and opportunities that AI presents, so that a fairer future of better work can be built.
The Lab offers firms and employees guidance and methodologies to identify problems and build practical solutions that pre-empt risks and enhance ‘good work’ when designing, developing and deploying AI systems.
Businesses have real choices when designing, developing and deploying new technologies. With a sharp focus on good work impacts, The Lab can help firms and employees make those better choices.
There is good evidence that businesses that complement AI with investment in human capabilities see better productivity returns.
Our unique range of expert resources, aligned with regulatory best practice and the Good Work Charter, allows businesses to bring to the fore the voice and experience of stakeholders from all levels of the workforce.
We invite those who want to be trailblazers in responsible AI to work with us. If you share this vision, or would like to explore investing in it, we’d love to hear from you.
"A suite of highly practical, and accessible tools ... this is how we empower business to future-proof responsibly." - Dr Anne-Marie Imafidon MBE, IFOW Trustee and Stemettes CEO
"The IFOW toolkits fill an important gap. They will help leaders be responsible with AI and feed into development of good regulation." - Lord Jim Knight, co-chair of the All Party Parliamentary Group on the Future of Work
"The Good Work Algorithmic Impact Assessment represents an important contribution to advancing responsible innovation in the context of workplace AI." - Dr Florian Ostmann, Head of AI Governance and Regulatory Innovation, The Alan Turing Institute
"The publication of IFOW's Good Work Algorithmic Impact Assessment marks a major policy breakthrough for ensuring that the design, development and deployment of work-related AI technologies is equitable, responsible and trustworthy." - Professor David Leslie, Professor of Ethics, Technology and Society, Queen Mary University of London
This guidance sets out an approach for involving workers in the assessment of AI systems that may have significant impacts on ‘Good Work’ principles.