Machine learning, justification, and computational reliabilism

Abstract

This article asks the question, "What is reliable machine learning?" As I intend to answer it, this is a question about epistemic justification: reliable machine learning gives us justification for believing its output. Current approaches to reliability (e.g., transparency) involve showing the inner workings of an algorithm (functions, variables, etc.) and how they produce outputs; we then have justification for believing the output because we know how it was computed. Justification is thus contingent on what can be shown about the algorithm, its properties, and its behavior. In this paper, I defend computational reliabilism (CR), a computationally inspired offshoot of process reliabilism that does not require showing the inner workings of an algorithm. CR attributes reliability to machine learning by identifying reliability indicators external to the algorithm (validation methods, knowledge-based integration, etc.). Thus, we have justification for believing the output of machine learning when we have identified the appropriate reliability indicators. CR is advanced as a more suitable epistemology for machine learning. The main goal of this article is to lay the groundwork for CR: how it works and what we can expect of it as a justificatory framework for reliable machine learning.

Author's Profile

Juan Manuel DurĂ¡n
Delft University of Technology
