Hard Choices in Artificial Intelligence

The criteria for identifying and diagnosing safety risks in complex social contexts remain unclear and contested. In this paper, the authors examine the vagueness that pervades debates about the safety and ethical behavior of AI systems, and demonstrate that it cannot be resolved through mathematical formalism alone: it requires deliberation about the politics of development as well as the context of deployment. The resulting framework, Hard Choices in Artificial Intelligence (HCAI), empowers developers by (1) identifying the points at which design decisions overlap with major sociotechnical challenges and (2) motivating the creation of stakeholder feedback channels so that safety issues can be addressed exhaustively. HCAI thereby contributes to a timely debate about the status of AI development in democratic societies, arguing that deliberation should be the goal of AI Safety, not just the procedure by which it is ensured.


Chosen by: Abigail Gilbert

Theme: Involvement

