TY - GEN
T1 - SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser
AU - Prasad, Grusha
AU - Linzen, Tal
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
N2 - Structural priming is a widely used psycholinguistic paradigm to study human sentence representations. In this work we introduce SPAWN, a cognitively motivated parser that can generate quantitative priming predictions from contemporary theories in syntax which assume a lexicalized grammar. By generating and testing priming predictions from competing theoretical accounts, we can infer which assumptions from syntactic theory are useful for characterizing the representations humans build when processing sentences. As a case study, we use SPAWN to generate priming predictions from two theories (Whiz-Deletion and Participial-Phase) which make different assumptions about the structure of English relative clauses. By modulating the reanalysis mechanism that the parser uses and the strength of the parser’s prior knowledge, we generated nine sets of predictions from each of the two theories. Then, we tested these predictions using a novel web-based comprehension-to-production priming paradigm. We found that while some of the predictions from the Participial-Phase theory aligned with human behavior, none of the predictions from the Whiz-Deletion theory did, thus suggesting that the Participial-Phase theory might better characterize human relative clause representations.
AB - Structural priming is a widely used psycholinguistic paradigm to study human sentence representations. In this work we introduce SPAWN, a cognitively motivated parser that can generate quantitative priming predictions from contemporary theories in syntax which assume a lexicalized grammar. By generating and testing priming predictions from competing theoretical accounts, we can infer which assumptions from syntactic theory are useful for characterizing the representations humans build when processing sentences. As a case study, we use SPAWN to generate priming predictions from two theories (Whiz-Deletion and Participial-Phase) which make different assumptions about the structure of English relative clauses. By modulating the reanalysis mechanism that the parser uses and the strength of the parser’s prior knowledge, we generated nine sets of predictions from each of the two theories. Then, we tested these predictions using a novel web-based comprehension-to-production priming paradigm. We found that while some of the predictions from the Participial-Phase theory aligned with human behavior, none of the predictions from the Whiz-Deletion theory did, thus suggesting that the Participial-Phase theory might better characterize human relative clause representations.
UR - http://www.scopus.com/inward/record.url?scp=85215530666&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85215530666&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85215530666
T3 - CoNLL 2024 - 28th Conference on Computational Natural Language Learning, Proceedings of the Conference
SP - 178
EP - 197
BT - CoNLL 2024 - 28th Conference on Computational Natural Language Learning, Proceedings of the Conference
A2 - Barak, Libby
A2 - Alikhani, Malihe
PB - Association for Computational Linguistics (ACL)
T2 - 28th Conference on Computational Natural Language Learning, CoNLL 2024
Y2 - 15 November 2024 through 16 November 2024
ER -