Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem

Social Epistemology Review and Reply Collective 13 (1) (2024)

Abstract

It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI), because this social epistemology requires trust between scientists who can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents can take full responsibility for the systems they use even if these systems are opaque. Requiring that an agent fully understand how a system works is an untenably strong condition for that agent to take full responsibility for the system, and it risks absolving AI developers of responsibility for their products. AI-based science need not create trust-related social epistemological problems if we keep in mind that what makes both individual scientists and their use of AI systems trustworthy isn’t full transparency of their internal processing but their adherence to social and institutional norms that ensure that scientific claims can be trusted.

Author's Profile

Uwe Peters
Utrecht University
