TY - GEN
T1 - Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction
AU - Ravfogel, Shauli
AU - Prasad, Grusha
AU - Linzen, Tal
AU - Goldberg, Yoav
N1 - Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021
Y1 - 2021
AB - When language models process syntactically complex sentences, do they use their representations of syntax in a manner that is consistent with the grammar of the language? We propose AlterRep, an intervention-based method to address this question. For any linguistic feature of a given sentence, AlterRep generates counterfactual representations by altering how the feature is encoded, while leaving intact all other aspects of the original representation. By measuring the change in a model’s word prediction behavior when these counterfactual representations are substituted for the original ones, we can draw conclusions about the causal effect of the linguistic feature in question on the model’s behavior. We apply this method to study how BERT models of different sizes process relative clauses (RCs). We find that BERT variants use RC boundary information during word prediction in a manner that is consistent with the rules of English grammar; this RC boundary information generalizes to a considerable extent across different RC types, suggesting that BERT represents RCs as an abstract linguistic category.
UR - http://www.scopus.com/inward/record.url?scp=85124063576&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124063576&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85124063576
T3 - CoNLL 2021 - 25th Conference on Computational Natural Language Learning, Proceedings
SP - 194
EP - 209
BT - CoNLL 2021 - 25th Conference on Computational Natural Language Learning, Proceedings
A2 - Bisazza, Arianna
A2 - Abend, Omri
PB - Association for Computational Linguistics (ACL)
T2 - 25th Conference on Computational Natural Language Learning, CoNLL 2021
Y2 - 10 November 2021 through 11 November 2021
ER -