Abstract
Contemporary autoregressive language models (LMs) trained purely on corpus data have been shown to capture numerous features of human incremental processing. However, past work has also suggested dissociations between corpus probabilities and human next-word predictions. Here we evaluate several state-of-the-art language models for their match to human next-word predictions and to reading time behavior from eye movements. We then propose a novel method for distilling the linguistic information implicit in human linguistic predictions into pre-trained LMs: Cloze Distillation. We apply this method to a baseline neural LM and show that it yields improvements in reading time prediction and generalization to held-out human cloze data.
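The record above is metadata-only, but the abstract's core idea lends itself to a short illustration. Below is a minimal sketch of what a cloze-distillation fine-tuning objective could look like: the standard next-word loss against the corpus token is interpolated with a cross-entropy term against the empirical human cloze distribution (soft targets, as in standard knowledge distillation). The function name, the `alpha` interpolation weight, and the tensor shapes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def cloze_distillation_loss(logits, corpus_targets, cloze_dist, alpha=0.5):
    """Illustrative cloze-distillation objective (names/weights are assumptions).

    logits:         (batch, vocab) model scores for the next word
    corpus_targets: (batch,) gold next-word indices from the corpus
    cloze_dist:     (batch, vocab) empirical human cloze distributions
    alpha:          interpolation weight between the two terms (hypothetical)
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Standard LM term: cross-entropy against the corpus continuation.
    lm_loss = F.nll_loss(log_probs, corpus_targets)
    # Distillation term: cross-entropy against the human cloze
    # distribution, treating it as a soft target over the vocabulary.
    distill_loss = -(cloze_dist * log_probs).sum(dim=-1).mean()
    return alpha * lm_loss + (1 - alpha) * distill_loss

# Illustrative usage with random tensors standing in for real model output.
logits = torch.randn(8, 50000, requires_grad=True)          # model scores
corpus_targets = torch.randint(0, 50000, (8,))              # corpus tokens
cloze_dist = torch.softmax(torch.randn(8, 50000), dim=-1)   # cloze proportions
loss = cloze_distillation_loss(logits, corpus_targets, cloze_dist)
loss.backward()  # gradients would fine-tune the pre-trained LM
```

The soft-target term is what lets information that is present in human predictions, but absent from the corpus one-hot targets, flow into the model during fine-tuning.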
| Original language | English (US) |
| --- | --- |
| Title of host publication | CoNLL 2020 - 24th Conference on Computational Natural Language Learning, Proceedings of the Conference |
| Editors | Raquel Fernández, Tal Linzen |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 609-619 |
| Number of pages | 11 |
| ISBN (Electronic) | 9781952148637 |
| State | Published - 2020 |
| Event | 24th Conference on Computational Natural Language Learning, CoNLL 2020 - Virtual, Online |
| Duration | Nov 19 2020 → Nov 20 2020 |
Publication series
| Name | CoNLL 2020 - 24th Conference on Computational Natural Language Learning, Proceedings of the Conference |
| --- | --- |
Conference
| Conference | 24th Conference on Computational Natural Language Learning, CoNLL 2020 |
| --- | --- |
| City | Virtual, Online |
| Period | 11/19/20 → 11/20/20 |
ASJC Scopus subject areas
- Artificial Intelligence
- Human-Computer Interaction
- Linguistics and Language