Abstract
We evaluate whether BERT, a widely used neural network for sentence processing, acquires an inductive bias towards forming structural generalizations through pretraining on raw data. We conduct four experiments testing its preference for structural vs. linear generalizations in different structure-dependent phenomena. We find that BERT makes a structural generalization in 3 out of 4 empirical domains (subject-auxiliary inversion, reflexive binding, and verb tense detection in embedded clauses), but makes a linear generalization when tested on NPI licensing. We argue that these results are the strongest evidence so far from artificial learners supporting the proposition that a structural bias can be acquired from raw data. If this conclusion is correct, it is tentative evidence that some linguistic universals can be acquired by learners without innate biases. However, the precise implications for human language acquisition are unclear, as humans learn language from significantly less data than BERT.
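The abstract does not spell out the probing setup, so the following is only a rough illustration of what a "structural vs. linear generalization" contrast looks like for subject-auxiliary inversion: the structural rule fronts the main-clause auxiliary, while the linear rule fronts the linearly first auxiliary. The sketch below scores such a minimal pair with BERT's masked-language-model head via pseudo-log-likelihood; the sentences, scoring method, and model checkpoint are illustrative assumptions, not the evaluation protocol used in the paper.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

# Hypothetical probe (not the paper's protocol): compare BERT's
# pseudo-log-likelihood for a question formed by the structural rule
# (front the main-clause auxiliary) vs. the linear rule (front the
# linearly first auxiliary).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Mask each token in turn and sum BERT's log-probability for the original token."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Illustrative minimal pair: the structural rule yields a grammatical
# question; the linear rule yields an ungrammatical one.
structural = "has the dog that is sleeping eaten the bone ?"
linear = "is the dog that sleeping has eaten the bone ?"
print("structural:", pseudo_log_likelihood(structural))
print("linear:    ", pseudo_log_likelihood(linear))
```

If BERT has internalized the structure-dependent rule, the structurally formed question should receive the higher score; this is merely a way to make the contrast in the abstract concrete, not a reproduction of the paper's experiments.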
Original language | English (US) |
---|---|
Pages | 1737-1743 |
Number of pages | 7 |
State | Published - 2020 |
Event | 42nd Annual Meeting of the Cognitive Science Society: Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020 - Virtual, Online (Jul 29 2020 → Aug 1 2020) |
Conference
Conference | 42nd Annual Meeting of the Cognitive Science Society: Developing a Mind: Learning in Humans, Animals, and Machines, CogSci 2020 |
---|---|
City | Virtual, Online |
Period | 7/29/20 → 8/1/20 |
Keywords
- BERT
- inductive bias
- learnability of grammar
- neural network
- poverty of the stimulus
- self-supervised learning
- structure dependence
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Science Applications
- Human-Computer Interaction
- Cognitive Neuroscience