References
  • The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences. Jake Quilty-Dunn, Nicolas Porot & Eric Mandelbaum - 2023 - Behavioral and Brain Sciences 46:e261.
    Mental representations remain the central posits of psychology after many decades of scrutiny. However, there is no consensus about the representational format(s) of biological cognition. This paper provides a survey of evidence from computational cognitive psychology, perceptual psychology, developmental psychology, comparative psychology, and social psychology, and concludes that one type of format that routinely crops up is the language-of-thought (LoT). We outline six core properties of LoTs: (i) discrete constituents; (ii) role-filler independence; (iii) predicate–argument structure; (iv) logical operators; (v) inferential (...)
  • Can You Hear Me Now? Sensitive Comparisons of Human and Machine Perception. Michael A. Lepori & Chaz Firestone - 2022 - Cognitive Science 46 (10):e13191.
    Cognitive Science, Volume 46, Issue 10, October 2022.
  • How Is Perception Tractable? Tyler Brooke-Wilson - 2023 - Philosophical Review 132 (2):239-292.
    Perception solves computationally demanding problems at lightning fast speed. It recovers sophisticated representations of the world from degraded inputs, often in a matter of milliseconds. Any theory of perception must be able to explain how this is possible; in other words, it must be able to explain perception’s computational tractability. One of the few attempts to move toward such an explanation is the information encapsulation hypothesis, which posits that perception can be fast because it keeps computational costs low by forgoing (...)
  • Disagreement & classification in comparative cognitive science. Alexandria Boyle - forthcoming - Noûs.
    Comparative cognitive science often involves asking questions like ‘Do nonhumans have C?’ where C is a capacity we take humans to have. These questions frequently generate unproductive disagreements, in which one party affirms and the other denies that nonhumans have the relevant capacity on the basis of the same evidence. I argue that these questions can be productively understood as questions about natural kinds: do nonhuman capacities fall into the same natural kinds as our own? Understanding such questions in this (...)
  • Direct Human-AI Comparison in the Animal-AI Environment. Konstantinos Voudouris, Matthew Crosby, Benjamin Beyret, José Hernández-Orallo, Murray Shanahan, Marta Halina & Lucy G. Cheke - 2022 - Frontiers in Psychology 13.
    Artificial Intelligence is making rapid and remarkable progress in the development of more sophisticated and powerful systems. However, the acknowledgement of several problems with modern machine learning approaches has prompted a shift in AI benchmarking away from task-oriented testing towards ability-oriented testing, in which AI systems are tested on their capacity to solve certain kinds of novel problems. The Animal-AI Environment is one such benchmark which aims to apply the ability-oriented testing used in comparative psychology to AI systems. Here, we (...)
  • Can Deep CNNs Avoid Infinite Regress/Circularity in Content Constitution? Jesse Lopes - 2023 - Minds and Machines 33 (3):507-524.
    The representations of deep convolutional neural networks (CNNs) are formed from generalizing similarities and abstracting from differences in the manner of the empiricist theory of abstraction (Buckner, Synthese 195:5339–5372, 2018). The empiricist theory of abstraction is well understood to entail infinite regress and circularity in content constitution (Husserl, Logical Investigations. Routledge, 2001). This paper argues these entailments hold a fortiori for deep CNNs. Two theses result: deep CNNs require supplementation by Quine’s “apparatus of identity and quantification” in order to (1) (...)
  • (What) Can Deep Learning Contribute to Theoretical Linguistics? Gabe Dupre - 2021 - Minds and Machines 31 (4):617-635.
    Deep learning techniques have revolutionised artificial systems’ performance on myriad tasks, from playing Go to medical diagnosis. Recent developments have extended such successes to natural language processing, an area once deemed beyond such systems’ reach. Despite their different goals, these successes have suggested that such systems may be pertinent to theoretical linguistics. The competence/performance distinction presents a fundamental barrier to such inferences. While DL systems are trained on linguistic performance, linguistic theories are aimed at competence. Such a barrier has traditionally (...)