TY - GEN
T1 - Colorless green recurrent networks dream hierarchically
AU - Gulordava, Kristina
AU - Bojanowski, Piotr
AU - Grave, Edouard
AU - Linzen, Tal
AU - Baroni, Marco
N1 - Publisher Copyright:
© 2018 The Association for Computational Linguistics.
PY - 2018
Y1 - 2018
AB - Recurrent neural networks (RNNs) have achieved impressive results in a variety of linguistic processing tasks, suggesting that they can induce non-trivial properties of language. We investigate here to what extent RNNs learn to track abstract hierarchical syntactic structure. We test whether RNNs trained with a generic language modeling objective in four languages (Italian, English, Hebrew, Russian) can predict long-distance number agreement in various constructions. We include in our evaluation nonsensical sentences where RNNs cannot rely on semantic or lexical cues ("The colorless green ideas I ate with the chair sleep furiously"), and, for Italian, we compare model performance to human intuitions. Our language-model-trained RNNs make reliable predictions about long-distance agreement, and do not lag much behind human performance. We thus bring support to the hypothesis that RNNs are not just shallow-pattern extractors, but that they also acquire deeper grammatical competence.
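N1 - The abstract describes a probability-comparison evaluation: the language model passes an agreement item if it assigns higher probability to the verb form that agrees in number with the (distant) subject than to the mismatching form. Below is a minimal illustrative sketch of that protocol, not the authors' released code; `lm_logprob` is a hypothetical stand-in for the paper's trained RNN language models, and the dummy distribution exists only to make the example runnable.
```python
import math

def lm_logprob(prefix: str, word: str) -> float:
    """Hypothetical LM interface: log P(word | prefix).
    Replace with a real trained language model; the toy uniform
    distribution here is for illustration only."""
    vocab = {"sleep": 0.5, "sleeps": 0.5}
    return math.log(vocab.get(word, 1e-9))

def agreement_correct(prefix: str, correct_form: str, wrong_form: str) -> bool:
    """An item is scored correct if the model prefers the verb form
    that agrees with the distant subject over the mismatching form."""
    return lm_logprob(prefix, correct_form) > lm_logprob(prefix, wrong_form)

# Nonce ("colorless green") item: the subject "ideas" is plural, so the
# agreeing form is "sleep", despite the intervening singular noun "chair"
# and the absence of helpful semantic or lexical cues.
prefix = "The colorless green ideas I ate with the chair"
print(agreement_correct(prefix, correct_form="sleep", wrong_form="sleeps"))
```
Accuracy over a set of such items, aggregated per construction and language, yields the agreement scores that the abstract compares against human intuitions for Italian.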
UR - http://www.scopus.com/inward/record.url?scp=85060078907&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85060078907&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85060078907
T3 - NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference
SP - 1195
EP - 1205
BT - Long Papers
PB - Association for Computational Linguistics (ACL)
T2 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2018
Y2 - 1 June 2018 through 6 June 2018
ER -