Contents
3 found
  1. Artificial Intelligence for the Internal Democracy of Political Parties. Claudio Novelli, Giuliano Formisano, Prathm Juneja, Giulia Sandri & Luciano Floridi - manuscript
    The article argues that AI can enhance the measurement and implementation of democratic processes within political parties, known as Intra-Party Democracy (IPD). It identifies the limitations of traditional methods for measuring IPD, which often rely on formal parameters, self-reported data, and tools like surveys. Such limitations lead to the collection of partial data, rare updates, and significant demands on resources. To address these issues, the article suggests that specific data management and Machine Learning (ML) techniques, such as natural language processing (...)
  2. Pseudo Language and the Chinese Room Experiment: Ability to Communicate using a Specific Language without Understanding it. Abolfazl Sabramiz - manuscript
    The ability to communicate in a specific language such as Chinese typically indicates that the speaker understands that language. A counterexample to this belief is John Searle’s Chinese room experiment, which shows that in certain circumstances one can communicate with a Chinese speaker without the Chinese language actually being understood in the conversation. In the present paper, we aim to present another counterexample showing that, in certain circumstances, we can communicate using a specific language (...)
  3. Axe the X in XAI: A Plea for Understandable AI. Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of Science for Machine Learning: Core Issues and New Perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be resolved by adopting any of four extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also (...)