Predicting and Preferring

Inquiry: An Interdisciplinary Journal of Philosophy (forthcoming)

Abstract

The use of machine learning, or “artificial intelligence” (AI), in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue that this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion, I connect this concern to broader issues in AI safety.

Author's Profile

Nathaniel Sharadin
University of Hong Kong
