References
  • Algorithmic Accountability and Public Reason. Reuben Binns - 2018 - Philosophy and Technology 31 (4):543-556.
    The ever-increasing application of algorithms to decision-making in a range of social contexts has prompted demands for algorithmic accountability. Accountable decision-makers must provide their decision-subjects with justifications for their automated system’s outputs, but what kinds of broader principles should we expect such justifications to appeal to? Drawing from political philosophy, I present an account of algorithmic accountability in terms of the democratic ideal of ‘public reason’. I argue that situating demands for algorithmic accountability within this justificatory framework enables us to (...)
  • Machines learning values. Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. New York, USA: Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: first, (...)
  • Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • On the Logical Impossibility of Solving the Control Problem. Caleb Rudnick - manuscript
    In the philosophy of artificial intelligence (AI) we are often warned of machines built with the best possible intentions, killing everyone on the planet and in some cases, everything in our light cone. At the same time, however, we are also told of the utopian worlds that could be created with just a single superintelligent mind. If we’re ever to live in that utopia (or just avoid dystopia) it’s necessary we solve the control problem. The control problem asks how humans (...)
  • Applying a principle of explicability to AI research in Africa: should we do it? Mary Carman & Benjamin Rosman - 2020 - Ethics and Information Technology 23 (2):107-117.
    Developing and implementing artificial intelligence (AI) systems in an ethical manner faces several challenges specific to the kind of technology at hand, including ensuring that decision-making systems making use of machine learning are just, fair, and intelligible, and are aligned with our human values. Given that values vary across cultures, an additional ethical challenge is to ensure that these AI systems are not developed according to some unquestioned but questionable assumption of universal norms but are in fact compatible with the (...)
  • Computing machinery and morality. Blay Whitby - 2008 - AI and Society 22 (4):551-563.
    Artificial Intelligence (AI) is a technology widely used to support human decision-making. Current areas of application include financial services, engineering, and management. A number of attempts to introduce AI decision support systems into areas which more obviously include moral judgement have been made. These include systems that give advice on patient care, on social benefit entitlement, and even ethical advice for medical professionals. Responding to these developments raises a complex set of moral questions. This paper proposes a clearer replacement question (...)
  • Thinking Inside the Box: Controlling and Using an Oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  • Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • The problem of superintelligence: political, not technological. Wolfhart Totschnig - 2019 - AI and Society 34 (4):907-920.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem posed by (...)
  • Fully Autonomous AI. Wolfhart Totschnig - 2020 - Science and Engineering Ethics 26 (5):2473-2485.
    In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the (...)
  • God-like robots: the semantic overlap between representation of divine and artificial entities. Nicolas Spatola & Karolina Urbanska - 2020 - AI and Society 35 (2):329-341.
    Artificial intelligence and robots may progressively take a more and more prominent place in our daily environment. Interestingly, in the study of how humans perceive these artificial entities, science has mainly taken an anthropocentric perspective (i.e., how distant from humans are these agents). Considering people’s fears and expectations from robots and artificial intelligence, they tend to be simultaneously afraid and allured to them, much as they would be to the conceptualisations related to the divine entities (e.g., gods). In two experiments, (...)
  • “Blessed by the algorithm”: Theistic conceptions of artificial intelligence in online discourse. Beth Singler - 2020 - AI and Society 35 (4):945-955.
    “My first long haul flight that didn’t fill up and an empty row for me. I have been blessed by the algorithm”. The phrase ‘blessed by the algorithm’ expresses the feeling of having been fortunate in what appears on your feed on various social media platforms, or in the success or virality of your content as a creator, or in what gig economy jobs you are offered. However, we can also place it within wider public discourse employing theistic conceptions (...)
  • From machine ethics to computational ethics. Samuel T. Segun - 2021 - AI and Society 36 (1):263-276.
    Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that using the term ‘machine ethics’ is too broad and glosses over issues that the term computational ethics best describes. I show that the subject of inquiry of computational ethics is of great value (...)
  • Autonomous Machines, Moral Judgment, and Acting for the Right Reasons. Duncan Purves, Ryan Jenkins & Bradley J. Strawser - 2015 - Ethical Theory and Moral Practice 18 (4):851-872.
    We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral (...)
  • Artificial Intelligence and Medical Humanities. Kirsten Ostherr - 2020 - Journal of Medical Humanities 43 (2):211-232.
    The use of artificial intelligence in healthcare has led to debates about the role of human clinicians in the increasingly technological contexts of medicine. Some researchers have argued that AI will augment the capacities of physicians and increase their availability to provide empathy and other uniquely human forms of care to their patients. The human vulnerabilities experienced in the healthcare context raise the stakes of new technologies such as AI, and the human dimensions of AI in healthcare have particular significance (...)
  • Robotification & ethical cleansing. Marco Nørskov - 2022 - AI and Society 37 (2):425-441.
    Robotics is currently not only a cutting-edge research area, but is potentially disruptive to all domains of our lives—for better and worse. While legislation is struggling to keep pace with the development of these new artifacts, our intellectual limitations and physical laws seem to present the only hard demarcation lines, when it comes to state-of-the-art R&D. To better understand the possible implications, the paper at hand critically investigates underlying processes and structures of robotics in the context of Heidegger’s and Nishitani’s (...)
  • Self-improving AI: an Analysis. [REVIEW] John Storrs Hall - 2007 - Minds and Machines 17 (3):249-259.
    Self-improvement was one of the aspects of AI proposed for study in the 1956 Dartmouth conference. Turing proposed a “child machine” which could be taught in the human manner to attain adult human-level intelligence. In latter days, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have (...)
  • Discourse analysis of academic debate of ethics for AGI. Ross Graham - 2022 - AI and Society 37 (4):1519-1532.
    Artificial general intelligence is a greatly anticipated technology with non-trivial existential risks, defined as machine intelligence with competence as great/greater than humans. To date, social scientists have dedicated little effort to the ethics of AGI or AGI researchers. This paper employs inductive discourse analysis of the academic literature of two intellectual groups writing on the ethics of AGI—applied and/or ‘basic’ scientific disciplines henceforth referred to as technicians (e.g., computer science, electrical engineering, physics), and philosophy-adjacent disciplines henceforth referred to as PADs (...)
  • Echoes of myth and magic in the language of Artificial Intelligence. Roberto Musa Giuliano - 2020 - AI and Society 35 (4):1009-1024.
    To a greater extent than in other technical domains, research and progress in Artificial Intelligence has always been entwined with the fictional. Its language echoes strongly with other forms of cultural narratives, such as fairytales, myth and religion. In this essay we present varied examples that illustrate how these analogies have guided not only readings of the AI enterprise by commentators outside the community but also inspired AI researchers themselves. Owing to their influence, we pay particular attention to the similarities (...)
  • Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
  • Existential risks: analyzing human extinction scenarios and related hazards. Nick Bostrom - 2002 - Journal of Evolution and Technology 9 (1).
    Because of accelerating technological progress, humankind may be rapidly approaching a critical phase in its career. In addition to well-known threats such as nuclear holocaust, the prospects of radically transforming technologies like nanotech systems and machine intelligence present us with unprecedented opportunities and risks. Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from (...)
  • The Implementation of Ethical Decision Procedures in Autonomous Systems: the Case of the Autonomous Vehicle. Katherine Evans - 2021 - Dissertation, Sorbonne Université
    The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. A great deal of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring (...)
  • The singularity: A philosophical analysis. David J. Chalmers - 2010 - Journal of Consciousness Studies 17 (9-10):7-65.
    What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far (...)
  • Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure. Samuel Alexander & Bill Hibbard - 2021 - Journal of Artificial General Intelligence 12 (1):1-25.
    In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods (...)
  • Leakproofing the Singularity. Roman V. Yampolskiy - 2012 - Journal of Consciousness Studies 19 (1-2):194-214.
    This paper attempts to formalize and to address the ‘leakproofing’ of the Singularity problem presented by David Chalmers. The paper begins with the definition of the Artificial Intelligence Confinement Problem. After analysis of existing solutions and their shortcomings, a protocol is proposed aimed at making a more secure confinement environment which might delay potential negative effect from the technological singularity while allowing humanity to benefit from the superintelligence.
  • Superintelligence as superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. New York, USA: Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not (...)
  • The Singularity Beyond Philosophy of Mind. Eric Steinhart - 2012 - Journal of Consciousness Studies 19 (7-8):131-137.
    Thought about the singularity intersects the philosophy of mind in deep and important ways. However, thought about the singularity also intersects many other areas of philosophy, including the history of philosophy, metaphysics, the philosophy of science, and the philosophy of religion. I point to some of those intersections. Singularitarian thought suggests that many of the objects and processes that once lay in the domain of revealed religion now lie in the domain of pure computer science.