TY - GEN
T1 - Characterizing Verbatim Short-Term Memory in Neural Language Models
AU - Armeni, Kristijan
AU - Honey, Christopher
AU - Linzen, Tal
N1 - Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
AB - When a language model is trained to predict natural language sequences, its prediction at each moment depends on a representation of prior context. What kind of information about the prior context can language models retrieve? We tested whether language models could retrieve the exact words that occurred previously in a text. In our paradigm, language models (transformers and an LSTM) processed English text in which a list of nouns occurred twice. We operationalized retrieval as the reduction in surprisal from the first to the second list. We found that the transformers retrieved both the identity and ordering of nouns from the first list. Further, the transformers' retrieval was markedly enhanced when they were trained on a larger corpus and with greater model depth. Lastly, their ability to index prior tokens was dependent on learned attention patterns. In contrast, the LSTM exhibited less precise retrieval, which was limited to list-initial tokens and to short intervening texts. The LSTM's retrieval was not sensitive to the order of nouns and it improved when the list was semantically coherent. We conclude that transformers implemented something akin to a working memory system that could flexibly retrieve individual token representations across arbitrary delays; conversely, the LSTM maintained a coarser and more rapidly-decaying semantic gist of prior tokens, weighted toward the earliest items.
UR - http://www.scopus.com/inward/record.url?scp=85151269162&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85151269162&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85151269162
T3 - CoNLL 2022 - 26th Conference on Computational Natural Language Learning, Proceedings of the Conference
SP - 405
EP - 424
BT - CoNLL 2022 - 26th Conference on Computational Natural Language Learning, Proceedings of the Conference
PB - Association for Computational Linguistics (ACL)
T2 - 26th Conference on Computational Natural Language Learning, CoNLL 2022, co-located and co-organized with EMNLP 2022
Y2 - 7 December 2022 through 8 December 2022
ER -