Language, common sense, and the Winograd schema challenge

Jacob Browning, Yann LeCun

Research output: Contribution to journal › Review article › peer-review

Abstract

Since the 1950s, philosophers and AI researchers have held that disambiguating natural language sentences depends on common sense. In 2012, the Winograd Schema Challenge was established to evaluate the common-sense reasoning abilities of a machine by testing its ability to disambiguate sentences. The designers argued that only a system capable of “thinking in the full-bodied sense” would be able to pass the test. By 2023, however, the original authors had conceded that the test was soundly defeated by large language models which still seem to lack common sense or full-bodied thinking. In this paper, we argue that disambiguating sentences only seemed like a good test of common sense based on a certain picture of the relationship between linguistic comprehension and semantic knowledge, one typically associated with the early computational theory of mind and Symbolic AI. If this picture is rejected, as it is by most LLM researchers, then disambiguation ceases to look like a comprehensive test of common sense and instead appears only to test linguistic competence. The upshot is that any linguistic test, not just disambiguation, is unlikely to tell us much about common sense or genuine intelligence.
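To make the test's structure concrete, here is a minimal, self-contained sketch of the classic Winograd schema pair and the chance baseline the challenge is built around. The WinogradSchema container and the resolve interface are illustrative assumptions for this sketch, not artifacts of the paper or of the official challenge.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class WinogradSchema:
    """One half of a Winograd schema: a sentence whose pronoun must be
    resolved to one of two candidate referents (hypothetical container)."""
    sentence: str               # contains an ambiguous pronoun
    pronoun: str
    candidates: tuple[str, str]
    answer: str                 # the referent a competent speaker would choose

# The classic example: flipping one "special word" (big -> small)
# flips which referent is correct, while the syntax stays identical.
SCHEMA_PAIR = [
    WinogradSchema(
        sentence="The trophy doesn't fit in the suitcase because it is too big.",
        pronoun="it",
        candidates=("the trophy", "the suitcase"),
        answer="the trophy",
    ),
    WinogradSchema(
        sentence="The trophy doesn't fit in the suitcase because it is too small.",
        pronoun="it",
        candidates=("the trophy", "the suitcase"),
        answer="the suitcase",
    ),
]

def score(resolve: Callable[[str, str, tuple[str, str]], str]) -> float:
    """Fraction of schemas a resolver gets right. `resolve` stands in for
    any system under test: rule-based, symbolic, or an LLM wrapper."""
    correct = sum(
        resolve(s.sentence, s.pronoun, s.candidates) == s.answer
        for s in SCHEMA_PAIR
    )
    return correct / len(SCHEMA_PAIR)

# A resolver that always picks the first candidate gets exactly one of
# the twins right -- the 50% chance baseline the pair design enforces.
print(score(lambda sent, pron, cands: cands[0]))  # 0.5
```

The twin-sentence design is what was meant to block shallow statistical cues: any resolver insensitive to the single flipped word can do no better than chance across the pair.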

Original language: English (US)
Article number: 104031
Journal: Artificial Intelligence
Volume: 325
DOIs
State: Published - Dec 2023

Keywords

  • Artificial intelligence
  • Common-sense
  • Disambiguation
  • Large language models
  • Symbolic AI
  • Winograd schema challenge

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
  • Artificial Intelligence
