Cloze Distillation: Improving Neural Language Models with Human Next-Word Predictions

Tiwalayo N. Eisape, Noga Zaslavsky, Roger P. Levy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Contemporary autoregressive language models (LMs) trained purely on corpus data have been shown to capture numerous features of human incremental processing. However, past work has also suggested dissociations between corpus probabilities and human next-word predictions. Here we evaluate several state-of-the-art language models for their match to human next-word predictions and to reading time behavior from eye movements. We then propose a novel method for distilling the linguistic information implicit in human linguistic predictions into pre-trained LMs: Cloze Distillation. We apply this method to a baseline neural LM and show improved reading time prediction and generalization to held-out human cloze data.
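The abstract describes Cloze Distillation only at a high level. As a rough illustration, the sketch below assumes the distillation term is a cross-entropy between the LM's next-word distribution and the empirical distribution of human cloze responses, interpolated with the ordinary corpus language-modeling loss; the function name, tensor shapes, and the weight alpha are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch of a cloze-distillation objective (not the authors' exact code).
import torch
import torch.nn.functional as F

def cloze_distillation_loss(logits, corpus_targets, cloze_dist, alpha=0.5):
    """Interpolate the standard LM loss with a distillation term that pushes the
    model's next-word distribution toward the human cloze distribution.

    logits:         (batch, vocab) next-word logits from the LM
    corpus_targets: (batch,) indices of the actual next words in the corpus
    cloze_dist:     (batch, vocab) empirical human cloze probabilities
    alpha:          illustrative interpolation weight (an assumption, not from the paper)
    """
    # Standard next-word prediction loss against the corpus target
    lm_loss = F.cross_entropy(logits, corpus_targets)
    # Cross-entropy of the model's predictive distribution against human cloze targets
    log_probs = F.log_softmax(logits, dim=-1)
    distill_loss = -(cloze_dist * log_probs).sum(dim=-1).mean()
    return alpha * lm_loss + (1.0 - alpha) * distill_loss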

Original language: English (US)
Title of host publication: CoNLL 2020 - 24th Conference on Computational Natural Language Learning, Proceedings of the Conference
Editors: Raquel Fernandez, Tal Linzen
Publisher: Association for Computational Linguistics (ACL)
Pages: 609-619
Number of pages: 11
ISBN (Electronic): 9781952148637
State: Published - 2020
Event: 24th Conference on Computational Natural Language Learning, CoNLL 2020 - Virtual, Online
Duration: Nov 19, 2020 - Nov 20, 2020

Publication series

Name: CoNLL 2020 - 24th Conference on Computational Natural Language Learning, Proceedings of the Conference

Conference

Conference: 24th Conference on Computational Natural Language Learning, CoNLL 2020
City: Virtual, Online
Period: 11/19/20 - 11/20/20

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Linguistics and Language
