Accountability in Artificial Intelligence: What It Is and How It Works

AI and Society 1:1-12 (2023)

Abstract

Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely, because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and outlining an architecture of seven features (context, range, agent, forum, standards, process, and implications). We analyse this architecture through four accountability goals (compliance, report, oversight, and enforcement). We argue that these goals are often complementary and that policy-makers emphasise or prioritise some over others depending on whether accountability is used proactively or reactively and on the missions of AI governance.
