TY - GEN
T1 - Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models
AU - Mueller, Aaron
AU - Xia, Yu
AU - Linzen, Tal
N1 - Funding Information:
This material is based upon work supported by the National Science Foundation (NSF) under Grant No. BCS-2114505. Aaron Mueller was supported by a National Science Foundation Graduate Research Fellowship (Grant #1746891). This work was also supported in part through the NYU IT High Performance Computing resources, services, and staff expertise.
Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
N2 - Structural probing work has found evidence for latent syntactic information in pre-trained language models. However, much of this analysis has focused on monolingual models, and analyses of multilingual models have employed correlational methods that are confounded by the choice of probing tasks. In this study, we causally probe multilingual language models (XGLM and multilingual BERT) as well as monolingual BERT-based models across various languages; we do this by performing counterfactual perturbations on neuron activations and observing the effect on models' subject-verb agreement probabilities. We observe where in the model and to what extent syntactic agreement is encoded in each language. We find significant neuron overlap across languages in autoregressive multilingual language models, but not masked language models. We also find two distinct layer-wise effect patterns and two distinct sets of neurons used for syntactic agreement, depending on whether the subject and verb are separated by other tokens. Finally, we find that behavioral analyses of language models are likely underestimating how sensitive masked language models are to syntactic information.
AB - Structural probing work has found evidence for latent syntactic information in pre-trained language models. However, much of this analysis has focused on monolingual models, and analyses of multilingual models have employed correlational methods that are confounded by the choice of probing tasks. In this study, we causally probe multilingual language models (XGLM and multilingual BERT) as well as monolingual BERT-based models across various languages; we do this by performing counterfactual perturbations on neuron activations and observing the effect on models' subject-verb agreement probabilities. We observe where in the model and to what extent syntactic agreement is encoded in each language. We find significant neuron overlap across languages in autoregressive multilingual language models, but not masked language models. We also find two distinct layer-wise effect patterns and two distinct sets of neurons used for syntactic agreement, depending on whether the subject and verb are separated by other tokens. Finally, we find that behavioral analyses of language models are likely underestimating how sensitive masked language models are to syntactic information.
UR - http://www.scopus.com/inward/record.url?scp=85153326203&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85153326203&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85153326203
T3 - CoNLL 2022 - 26th Conference on Computational Natural Language Learning, Proceedings of the Conference
SP - 95
EP - 109
BT - CoNLL 2022 - 26th Conference on Computational Natural Language Learning, Proceedings of the Conference
PB - Association for Computational Linguistics (ACL)
T2 - 26th Conference on Computational Natural Language Learning, CoNLL 2022, co-located and co-organized with EMNLP 2022
Y2 - 7 December 2022 through 8 December 2022
ER -