Abstract
Machines have achieved a broad and growing set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP). Psychologists have shown increasing interest in such models, comparing their output to psychological judgments such as similarity, association, priming, and comprehension, raising the question of whether the models could serve as psychological theories. In this article, we compare how humans and machines represent the meaning of words. We argue that contemporary NLP systems are fairly successful models of human word similarity, but they fall short in many other respects. Current models are too strongly linked to the text-based patterns in large corpora, and too weakly linked to the desires, goals, and beliefs that people express through words. Word meanings must also be grounded in perception and action and be capable of flexible combinations in ways that current systems are not.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 401-431 |
| Number of pages | 31 |
| Journal | Psychological Review |
| Volume | 130 |
| Issue number | 2 |
| DOIs | |
| State | Published - Jul 22 2021 |
Keywords
- Natural Language Processing
- concepts
- distributional semantics
- neural networks
- word meaning
ASJC Scopus subject areas
- General Psychology