Algorithmic Surveillance and the Political Life of Error

Authors

  • Claudia Aradau, King's College London, GB
  • Tobias Blanke, University of Amsterdam, NL

DOI:

https://doi.org/10.5334/jhk.42

Keywords:

algorithms, biometrics, surveillance, ignorance, error

Abstract

Concerns with errors, mistakes, and inaccuracies have shaped political debates about what technologies do, where and how certain technologies can be used, and for which purposes. However, error has received scant attention in the emerging field of ignorance studies. In this article, we analyze how errors have been mobilized in scientific and public controversies over surveillance technologies. By juxtaposing nineteenth-century debates about the errors of biometric technologies for policing and surveillance with current criticisms of facial recognition systems, we trace a transformation of error and its political life. We argue that the modern preoccupation with error, and the intellectual habits inculcated to eliminate or tame it, have been transformed with machine learning. Machine learning algorithms do not eliminate or tame error; they optimize it. Therefore, despite reports by digital rights activists, civil liberties organizations, and academics highlighting algorithmic bias and error, facial recognition systems have continued to be rolled out. Drawing on a landmark legal case around facial recognition in the UK, we show how optimizing error also remakes the conditions for a critique of surveillance.

This article is part of a special issue entitled “Histories of Ignorance,” edited by Lukas M. Verburgt and Peter Burke.

Published

2021-11-29

Section

Special Issue