In this paper, Selbst examines the key considerations and tensions that arise from the private sector's deeply involved role in AI governance, adopting an institutional governance perspective.
Scholars and advocates have proposed algorithmic impact assessments (AIAs) as a regulatory strategy for addressing and correcting algorithmic harms. In practice, an impact assessment framework relies on expertise and information that only a project's creators possess. Technology firms will therefore inevitably have a degree of practical discretion in the assessment, and willing cooperation from firms is necessary for the regulation to work. But a regime that relies on good-faith partnership with the private sector can also be undermined by that sector's incentives and institutional logics. This Article argues that for AIA regulation to be effective, it must anticipate the ways such regulation will be filtered through the private sector's institutional environment.
This Article combines insights from governance, organizational theory, and computer science to analyze how future AIA regulations will be implemented on the ground. AIAs pursue two goals: prompting early consideration of social impacts and producing documentation that supports future policy learning. Institutional logics, such as liability avoidance and the profit motive, will make the first goal difficult to achieve in the short term. But AIAs can still be beneficial. The second goal does not require full compliance to be successful, and over time there is reason to believe that AIAs can be part of a broader cultural shift toward accountability within the technology industry. That shift will lead to greater buy-in and less need for enforcement of documentation requirements.