Lessons for artificial intelligence from the study of natural stupidity

Alexander S. Rich, Todd M. Gureckis

Research output: Contribution to journal › Review article › peer-review

Abstract

Artificial intelligence and machine learning systems are increasingly replacing human decision makers in commercial, healthcare, educational, and government contexts. But rather than eliminating human errors and biases, these algorithms have in some cases been found to reproduce or amplify them. We argue that to better understand how and why these biases develop, and when they can be prevented, machine learning researchers should look to the decades-long literature on biases in human learning and decision-making. We examine three broad causes of bias: small and incomplete datasets, learning from the results of one's own decisions, and biased inference and evaluation processes. For each, findings from the psychology literature are introduced along with connections to the machine learning literature. We argue that rather than viewing machine systems as universal improvements over human decision makers, policymakers and the public should acknowledge that these systems share many of the same limitations that frequently inhibit human judgment, for many of the same reasons.
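As a concrete illustration of the second cause named above (learning from the results of one's own decisions), the minimal sketch below simulates a hypothetical lender that observes repayment outcomes only for applicants it approves. The setup, names, and numbers are illustrative assumptions, not code or data from the paper; the point is simply that a rejected group's estimate is never corrected, so an initial bias can persist indefinitely.

import numpy as np

# Hypothetical selective-feedback loop: outcomes are observed only for
# approved applicants, so estimates for rejected groups never update.
rng = np.random.default_rng(0)

true_repay = {"A": 0.7, "B": 0.7}    # both groups repay equally often
est_repay = {"A": 0.60, "B": 0.40}   # initial estimates favor group A
pseudo_n = {"A": 50.0, "B": 50.0}    # weight given to the initial estimate

for _ in range(5000):
    group = rng.choice(["A", "B"])
    if est_repay[group] >= 0.5:      # greedy policy: approve if estimate >= 0.5
        outcome = rng.random() < true_repay[group]
        pseudo_n[group] += 1
        # Running-mean update, applied only when an outcome is observed.
        est_repay[group] += (outcome - est_repay[group]) / pseudo_n[group]
    # A rejected applicant's outcome is never observed, so group B's
    # estimate stays frozen and the initial bias is never corrected.

print(est_repay)  # A's estimate approaches 0.7; B's remains near 0.4

Under these assumptions the algorithm's bias is self-sustaining even though the two groups behave identically, which mirrors the feedback-driven biases the abstract attributes to both human and machine learners.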

Original language: English (US)
Pages (from-to): 174-180
Number of pages: 7
Journal: Nature Machine Intelligence
Volume: 1
Issue number: 4
DOIs
State: Published - Apr 1, 2019

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
  • Computer Vision and Pattern Recognition
  • Computer Networks and Communications
  • Artificial Intelligence
