TY - CPAPER
T1 - Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks
T2 - 40th Annual Meeting of the Cognitive Science Society: Changing Minds, CogSci 2018
AU - McCoy, R. Thomas
AU - Frank, Robert
AU - Linzen, Tal
N1 - Funding Information:
Our experiments were conducted using the resources of the Maryland Advanced Research Computing Center (MARCC). We thank Joe Pater, Paul Smolensky, and the JHU Computational Psycholinguistics group for helpful comments.
Publisher Copyright:
© 2018 Proceedings of the 40th Annual Meeting of the Cognitive Science Society, CogSci 2018. All rights reserved.
PY - 2018
Y1 - 2018
AB - Syntactic rules in natural language typically need to make reference to hierarchical sentence structure. However, the simple examples that language learners receive are often equally compatible with linear rules. Children consistently ignore these linear explanations and settle instead on the correct hierarchical one. This fact has motivated the proposal that the learner's hypothesis space is constrained to include only hierarchical rules. We examine this proposal using recurrent neural networks (RNNs), which are not constrained in such a way. We simulate the acquisition of question formation, a hierarchical transformation, in a fragment of English. We find that some RNN architectures tend to learn the hierarchical rule, suggesting that hierarchical cues within the language, combined with the implicit architectural biases inherent in certain RNNs, may be sufficient to induce hierarchical generalizations. The likelihood of acquiring the hierarchical generalization increased when the language included an additional cue to hierarchy in the form of subject-verb agreement, underscoring the role of cues to hierarchy in the learner's input.
KW - learning bias
KW - poverty of the stimulus
KW - recurrent neural networks
UR - http://www.scopus.com/inward/record.url?scp=85139553221&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85139553221&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85139553221
T3 - Proceedings of the 40th Annual Meeting of the Cognitive Science Society, CogSci 2018
SP - 2096
EP - 2101
BT - Proceedings of the 40th Annual Meeting of the Cognitive Science Society, CogSci 2018
PB - The Cognitive Science Society
Y2 - 25 July 2018 through 28 July 2018
ER -