Institutional Trust in Medicine in the Age of Artificial Intelligence

In David Collins, Mark Alfano & Iris Jovanovic (eds.), The Moral Psychology of Trust. Rowman and Littlefield/Lexington Books (2023)

Abstract

It is easier to talk frankly to a person whom one trusts. It is also easier to agree with a scientist whom one trusts. Even though in both cases the psychological state that underlies the behavior is called ‘trust’, it is controversial whether it is a token of the same psychological type. Trust can serve an affective, epistemic, or other social function, and interacts with other psychological states in a variety of ways. The way that the functional role of trust changes across contexts and objects is further complicated when communities and individuals mediate it through technologies, and even more so when that mediation involves artificial intelligence (AI) and machine learning (ML). In this chapter I examine the ways in which trust in institutions, and specifically in the medical profession, is affected by the use of AI and ML. There are two key elements of this analysis. The first is a disanalogy between institutional trust in medicine and institutional trust in science (Irzik and Kurtulmus 2021, 2019; Kitcher 2001). I note that as AI and ML become a more prominent part of medicine, trust in a medical institution becomes more like trust in a scientific institution. This is problematic for institutional trust in medicine and for the practice of medicine, since institutional trust in science has been undermined by, among other things, the spread of misinformation online and the replication crisis (Romero 2019). The second is a strong analogy between the psychological state of the person who trusts a scientific report or testimony and the psychological state of a patient who trusts individual recommendations made by a medical professional in a clinical setting. In both cases, institutional trust makes it less likely that a mistake or malfeasance will result in reactive attitudes, such as blame or anger, directed at other individual members of that institution. However, it also renders people vulnerable enough to blame the institution itself.
This, over time, can erode trust in the institution and naturally leads to policy recommendations that aim to preserve institutional trust. I survey two ways in which this can be done for institutional trust in medicine in the age of AI and ML.

Author's Profile

Michał Klincewicz
Tilburg University
