On the Logical Impossibility of Solving the Control Problem

Abstract

In the philosophy of artificial intelligence (AI) we are often warned of machines, built with the best possible intentions, killing everyone on the planet and, in some cases, everything in our light cone. At the same time, however, we are also told of the utopian worlds that could be created with just a single superintelligent mind. If we are ever to live in that utopia (or simply avoid dystopia), it is necessary that we solve the control problem. The control problem asks how humans could maintain arbitrary control over an AI. Nick Bostrom and other AI researchers have proposed various theoretical solutions to the control problem. In this paper, I will not look at the empirical question of how to solve the control problem. Instead, I will ask whether we can solve it at all; a critical assumption most AI researchers have made is that we can. I propose, in fact, that we have a priori grounds for believing it is logically impossible to solve the control problem, since all superintelligent minds are, by definition, uncontrollable.
