  • Feminist Re-Engineering of Religion-Based AI Chatbots. Hazel T. Biana - 2024 - Philosophies 9 (1):20.
    Religion-based AI chatbots serve religious practitioners by bringing them godly wisdom through technology. These bots reply to spiritual and worldly questions by drawing insights or citing verses from the Quran, the Bible, the Bhagavad Gita, the Torah, or other holy books. They answer religious and theological queries by claiming to offer historical contexts and providing guidance and counseling to their users. A criticism of these bots is that they may give inaccurate answers and proliferate bias by propagating homogenized versions of (...)
  • A Credence-based Theory-heavy Approach to Non-human Consciousness. Christian de Weerd - 2024 - Synthese 203 (171).
    Many different methodological approaches have been proposed to infer the presence of consciousness in non-human systems. In this paper, a version of the theory-heavy approach is defended. Theory-heavy approaches rely heavily on considerations from theories of consciousness to make inferences about non-human consciousness. Recently, the theory-heavy approach has been critiqued in the form of Birch's (Noûs, 56(1): 133-153, 2022) dilemma of demandingness and Shevlin's (Mind & Language, 36(2): 297-314, 2021) specificity problem. However, both challenges implicitly assume an inapt characterization of (...)
  • Consciousness, Machines, and Moral Status. Henry Shevlin - manuscript
    In light of the recent breakneck pace of machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that, as matters stand, these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI. Section 1 of the paper I (...)
  • Why do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence. Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning yields better model performance than rule-based learning when training neural networks, such as large language models. I particularly focus on why education aiming at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar to exemplar-based learning (...)
  • Does thought require sensory grounding? From pure thinkers to large language models. David J. Chalmers - 2023 - Proceedings and Addresses of the American Philosophical Association 97:22-45.
    Does the capacity to think require the capacity to sense? A lively debate on this topic runs throughout the history of philosophy and now animates discussions of artificial intelligence. Many have argued that AI systems such as large language models cannot think and understand if they lack sensory grounding. I argue that thought does not require sensory grounding: there can be pure thinkers who can think without any sensory capacities. As a result, the absence of sensory grounding does not entail (...)
  • Computing Cultures: Historical and Philosophical Perspectives. Juan Luis Gastaldi - 2024 - Minds and Machines 34 (1):1-10.
  • Artificial consciousness: A perspective from the free energy principle. Wanja Wiese - manuscript
    Could a sufficiently detailed computer simulation of consciousness replicate consciousness? In other words, is performing the right computations sufficient for artificial consciousness? Or will there remain a difference between simulating and being a conscious system, because the right computations must be implemented in the right way? From the perspective of Karl Friston's free energy principle, self-organising systems (such as living organisms) share a set of properties that could be realised in artificial systems, but are not instantiated by computers with a (...)
  • How Do You Solve a Problem like DALL-E 2? Kathryn Wojtkiewicz - forthcoming - Journal of Aesthetics and Art Criticism.
    The arrival of image-making generative artificial intelligence (AI) programs has been met with a broad rebuke: to many, it feels inherently wrong to regard images made using generative AI programs as artworks. I am skeptical of this sentiment, and in what follows I aim to demonstrate why. I suspect AI-generated images can be considered artworks; more specifically, that generative AI programs are, in many cases, just another tool artists can use to realize their creative intent. I begin with an (...)
  • The Moral Significance of the Phenomenology of Phenomenal Consciousness in Case of Artificial Agents. Kamil Mamak - 2023 - American Journal of Bioethics Neuroscience 14 (2):160-162.
    In a recent article, Joshua Shepherd identifies six problems with attributing moral status to nonhumans on the basis of consciousness (Shepherd 2023). In this commentary, I want to draw out yet ano...
  • How to deal with risks of AI suffering. Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...
  • All too human? Identifying and mitigating ethical risks of Social AI. Henry Shevlin - manuscript
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot relationships. Section (...)
  • AI Wellbeing. Simon Goldstein & Cameron Domenico Kirk-Giannini - manuscript
    Under what conditions would an artificially intelligent system have wellbeing? Despite its obvious bearing on the ethics of human interactions with artificial systems, this question has received little attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing artificial (...)