  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
  • How scientific models can explain. Alisa Bokulich - 2011 - Synthese 180 (1):33-45.
    Scientific models invariably involve some degree of idealization, abstraction, or fictionalization of their target system. Nonetheless, I argue that there are circumstances under which such false models can offer genuine scientific explanations. After reviewing three different proposals in the literature for how models can explain, I shall introduce a more general account of what I call model explanations, which specify the conditions under which models can be counted as explanatory. I shall illustrate this new framework by applying it to the (...)
  • On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning. Justin B. Biddle - 2022 - Canadian Journal of Philosophy 52 (3):321-341.
    Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems (...)
  • Dissecting explanatory power. Petri Ylikoski & Jaakko Kuorikoski - 2010 - Philosophical Studies 148 (2):201-219.
    Comparisons of rival explanations or theories often involve vague appeals to explanatory power. In this paper, we dissect this metaphor by distinguishing between different dimensions of the goodness of an explanation: non-sensitivity, cognitive salience, precision, factual accuracy and degree of integration. These dimensions are partially independent and often come into conflict. Our main contribution is to go beyond simple stipulation or description by explicating why these factors are taken to be explanatory virtues. We accomplish this by using the contrastive-counterfactual approach (...)
  • Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  • Acceptance, Values, and Inductive Risk. Daniel Steel - 2013 - Philosophy of Science 80 (5):818-828.
    The argument from inductive risk attempts to show that practical and ethical costs of errors should influence standards of evidence for accepting scientific claims. A common objection charges that this argument presupposes a behavioral theory of acceptance that is inappropriate for science. I respond by showing that the argument from inductive risk is supported by a nonbehavioral theory of acceptance developed by Cohen, which defines acceptance in terms of premising. Moreover, I argue that theories designed to explain how acceptance can (...)
  • The Scientist Qua Scientist Makes Value Judgments. Richard Rudner - 1953 - Philosophy of Science 20 (1):1-6.
    The question of the relationship of the making of value judgments in a typically ethical sense to the methods and procedures of science has been discussed in the literature at least to that point which e. e. cummings somewhere refers to as “The Mystical Moment of Dullness.” Nevertheless, albeit with some trepidation, I feel that something more may fruitfully be said on the subject.
  • The diverse aims of science. Angela Potochnik - 2015 - Studies in History and Philosophy of Science Part A 53:71-80.
    There is increasing attention to the centrality of idealization in science. One common view is that models and other idealized representations are important to science, but that they fall short in one or more ways. On this view, there must be an intermediary step between idealized representation and the traditional aims of science, including truth, explanation, and prediction. Here I develop an alternative interpretation of the relationship between idealized representation and the aims of science. In my view, continuing, widespread idealization (...)
  • Model Evaluation: An Adequacy-for-Purpose View. Wendy S. Parker - 2020 - Philosophy of Science 87 (3):457-477.
    According to an adequacy-for-purpose view, models should be assessed with respect to their adequacy or fitness for particular purposes. Such a view has been advocated by scientists and philosophers...
  • Values and inductive risk in machine learning modelling: the case of binary classification models. Koray Karaca - 2021 - European Journal for Philosophy of Science 11 (4):1-27.
    I examine the construction and evaluation of machine learning binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a certain disease like cancer and heart disease. I argue that the construction of ML classification models involves an optimisation process aiming at the minimization of the inductive risk associated with the intended uses of these models. I also argue that the construction of these models is (...)
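Karaca’s claim that building an ML classifier is an optimisation over inductive risk can be made concrete with a small sketch. The example below is not from the paper; it is a minimal illustration, assuming a scikit-learn logistic regression on synthetic data and purely hypothetical error costs, of how the choice of decision threshold encodes a judgment about how much worse a false negative is than a false positive.

```python
# Illustrative sketch (not from Karaca 2021): the decision threshold of a
# binary classifier encodes a trade-off between the two kinds of error.
# The cost values below are hypothetical stand-ins for non-epistemic stakes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data standing in for a disease-screening task.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]

# Assumed costs: a missed case (false negative) is judged ten times worse
# than a false alarm (false positive).
COST_FN, COST_FP = 10.0, 1.0

def expected_cost(threshold):
    preds = probs >= threshold
    fn = np.sum((preds == 0) & (y_te == 1))
    fp = np.sum((preds == 1) & (y_te == 0))
    return COST_FN * fn + COST_FP * fp

thresholds = np.linspace(0.05, 0.95, 19)
best = min(thresholds, key=expected_cost)
print(f"cost-minimizing threshold: {best:.2f}")
```

With the assumed 10:1 cost ratio the cost-minimizing threshold typically falls well below the default 0.5, which is the kind of value-laden design choice the abstract describes.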
  • Inductive risk and values in science. Heather Douglas - 2000 - Philosophy of Science 67 (4):559-579.
    Although epistemic values have become widely accepted as part of scientific reasoning, non-epistemic values have been largely relegated to the "external" parts of science (the selection of hypotheses, restrictions on methodologies, and the use of scientific technologies). I argue that because of inductive risk, or the risk of error, non-epistemic values are required in science wherever non-epistemic consequences of error should be considered. I use examples from dioxin studies to illustrate how non-epistemic consequences of error can and should be considered (...)
  • Theory choice, non-epistemic values, and machine learning. Ravit Dotan - 2020 - Synthese (11):1-21.
    I use a theorem from machine learning, called the “No Free Lunch” theorem, to support the claim that non-epistemic values are essential to theory choice. I argue that NFL entails that predictive accuracy is insufficient to favor a given theory over others, and that NFL challenges our ability to give a purely epistemic justification for using other traditional epistemic virtues in theory choice. In addition, I argue that the natural way to overcome NFL’s challenge is to use non-epistemic values. If (...)
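The “No Free Lunch” intuition Dotan appeals to can be illustrated with a toy enumeration. The sketch below is not the formal theorem and is not from the paper; it assumes a hypothetical set of 3-bit inputs with a few observed labels and shows that every hypothesis consistent with that data predicts each unseen input as 1 exactly as often as 0, so fit to the data alone gives no reason to favor one hypothesis over another.

```python
# Illustrative sketch (not from Dotan 2020): with an unrestricted hypothesis
# space, training data alone cannot favor one consistent hypothesis over
# another on unseen inputs -- the intuition behind "No Free Lunch".
from itertools import product

inputs = list(product([0, 1], repeat=3))            # all 8 possible 3-bit inputs
train = {(0, 0, 0): 0, (0, 1, 1): 1, (1, 0, 1): 1}  # hypothetical observed labels
unseen = [x for x in inputs if x not in train]

# Enumerate every labelling of the unseen inputs that agrees with the data.
consistent = []
for labels in product([0, 1], repeat=len(unseen)):
    h = dict(train)
    h.update(dict(zip(unseen, labels)))
    consistent.append(h)

print(f"{len(consistent)} hypotheses fit the observed data perfectly")  # 2**5 = 32

# For each unseen input, exactly half of the consistent hypotheses predict 1,
# so accuracy on the observed data cannot break the tie.
for x in unseen:
    ones = sum(h[x] for h in consistent)
    print(x, f"-> predicts 1 in {ones}/{len(consistent)} consistent hypotheses")
```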
  • Race. Michael James - 2008 - Stanford Encyclopedia of Philosophy.