This report from the Ada Lovelace Institute, AI Now Institute, and Open Government Partnership reflects on current models of accountability for AI.
As governments increasingly turn to algorithms to support decision-making in public services, there is growing evidence that these systems can cause harm and frequently lack transparency in their implementation. Reformers inside and outside of government are turning to regulatory and policy tools, hoping to ensure algorithmic accountability across countries and contexts. These responses are emergent and shifting rapidly, and they vary widely in form and substance, ranging from legally binding commitments to high-level principles and voluntary guidelines.
This report presents evidence on the use of algorithmic accountability policies in different contexts, from the perspective of those implementing these tools, and explores the limits of legal and policy mechanisms in ensuring safe and accountable algorithmic systems.