  1. The Executioner Paradox: Understanding Self-Referential Dilemma in Computational Systems. Sachit Mahajan - forthcoming - AI and Society:1-8.
    As computational systems burgeon with advancing artificial intelligence (AI), the deterministic frameworks underlying them face novel challenges, especially when interfacing with self-modifying code. The Executioner Paradox, introduced herein, exemplifies such a challenge where a deterministic Executioner Machine (EM) grapples with self-aware and self-modifying code. This unveils a self-referential dilemma, highlighting a gap in current deterministic computational frameworks when faced with self-evolving code. In this article, the Executioner Paradox is proposed, highlighting the nuanced interactions between deterministic decision-making and self-aware code, and (...)
  2. We are Building Gods: AI as the Anthropomorphised Authority of the Past. Carl Öhman - 2024 - Minds and Machines 34 (1):1-18.
    This article argues that large language models (LLMs) should be interpreted as a form of gods. In a theological sense, a god is an immortal being that exists beyond time and space. This is clearly nothing like LLMs. In an anthropological sense, however, a god is rather defined as the personified authority of a group through time—a conceptual tool that molds a collective of ancestors into a unified agent or voice. This is exactly what LLMs are. They are products of (...)
  3. ChatGPT: Deconstructing the Debate and Moving It Forward. Mark Coeckelbergh & David J. Gunkel - forthcoming - AI and Society:1-11.
    Large language models such as ChatGPT enable users to automatically produce text but also raise ethical concerns, for example about authorship and deception. This paper analyses and discusses some key philosophical assumptions in these debates, in particular assumptions about authorship and language and—our focus—the use of the appearance/reality distinction. We show that there are alternative views of what goes on with ChatGPT that do not rely on this distinction. For this purpose, we deploy the two phased approach of deconstruction and (...)
  4. Personhood and AI: Why Large Language Models Don’t Understand Us. Jacob Browning - forthcoming - AI and Society:1-8.
    Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I contrast this (...)
  5. Large Language Models: A Historical and Sociocultural Perspective. Eugene Yu Ji - 2024 - Cognitive Science 48 (3):e13430.
    This letter explores the intricate historical and contemporary links between large language models (LLMs) and cognitive science through the lens of information theory, statistical language models, and socioanthropological linguistic theories. The emergence of LLMs highlights the enduring significance of information‐based and statistical learning theories in understanding human communication. These theories, initially proposed in the mid‐20th century, offered a visionary framework for integrating computational science, social sciences, and humanities, which nonetheless was not fully fulfilled at that time. The subsequent development of (...)