Why machines do not understand: A response to Søgaard

PhilArchive (2023)

Abstract

Some defenders of so-called 'artificial intelligence' believe that machines can understand language. In particular, Søgaard has argued for a thesis of this sort in his "Understanding models understanding language" (2022). His idea is that (1) where there is semantics there is also understanding, and (2) machines are not only capable of what he calls 'inferential semantics' but can even (with the help of inputs from sensors) 'learn' referential semantics. We show that he goes wrong because he pays insufficient attention to the difference between language as used by humans and the sequences of inert symbols that arise when language is stored on hard drives or in books in libraries.

Author Profiles

Jobst Landgrebe
State University of New York (SUNY)
Barry Smith
University at Buffalo
