Algorithmic Political Bias Can Reduce Political Polarization

Philosophy and Technology 35 (3):1-7 (2022)

Abstract

Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political classifications entrench political identities, I contend that they may often produce the opposite result. They can lead people to change in ways that disconfirm the classifications. Consequently and counterintuitively, algorithmic political bias can in fact decrease political entrenchment and polarization.

Author's Profile

Uwe Peters
Utrecht University
