Is a subpersonal epistemology possible? Re-evaluating cognitive integration for extended cognition

Dissertation, University of Edinburgh (2021)

Abstract

Virtue reliabilism provides an account of epistemic integration that explains how a reliable belief-forming process can become a knowledge-conducive ability of one’s cognitive character. The univocal view suggests that this epistemic integration can also explain how an external process can extend one’s cognition into the environment. Andy Clark finds a problem with the univocal view. He claims that cognitive extension is a wholly subpersonal affair, whereas the epistemic integration that virtue reliabilism puts forward requires personal-level agential involvement. To adjust the univocal view, Clark recommends a subpersonal account of epistemic integration, one that also paves the way for a wholly subpersonal epistemology. On this account, an epistemic agent can take responsibility for her reliable belief-forming process by way of entirely subpersonal mechanisms. The aim of this thesis is to argue against a subpersonal epistemology and the need for it. First, I call into question the conditions that motivate extended cognition: the so-called ‘glue and trust’ requirements and the functional parity principle. Neither of these conditions demands that extension be understood in entirely subpersonal terms. On the contrary, the glue and trust conditions suggest that agents should personally and actively engage with their external vehicles in order to extend cognition. Further, I consider an important disparity between the two kinds of integration. The integration that prompts extension can happen immediately, whereas the integration that makes a reliable belief-forming process knowledge-conducive is almost always a slow process. In light of this and other similar inconsistencies between the two integration accounts, I conclude that this type of univocal view is difficult to defend. And since that type of univocal view is the main reason for a subpersonal epistemology, the need for one does not arise. Next, I locate Clark’s main motivation for his subpersonal epistemology in his account of the predictive brain. While the predictive brain provides useful insight into how an external process might extend one’s cognition, I show that it runs into problems when it tries to account for epistemic integration. Then, I explore how the concept of epistemic defeaters informs epistemic integration. Agents must meet a specific no-defeater condition to be sensitive to the reliability of their belief-forming processes and to employ them responsibly. In an entirely subpersonal epistemology, defeasibility theory has no means of explaining how an agent becomes sensitive to the reliability of her process. Subpersonal mechanisms can become sensitive to the new process, but that does not allow the agent to employ the process responsibly. Finally, I use an AI case, based on Amazon Alexa, to argue that all accounts of epistemic integration ought to describe explicitly how one’s subpersonal mechanisms link to one’s whole person, even if the link is weak and indirect. Without this connection between one’s subpersonal mechanisms and one’s person, it can become difficult to ascribe beliefs to the agent. This becomes apparent when Alexa leads to ever-increasing layers of cognitive, mental, and agential extension.

Author's Profile

Hadeel Naeem
RWTH Aachen University
