TY - JOUR
T1 - Meaning creation in novel noun-noun compounds: humans and language models
T2 - Language, Cognition and Neuroscience
AU - Chen, Phoebe
AU - Poeppel, David
AU - Zuanazzi, Arianna
N1 - Publisher Copyright:
© 2023 Informa UK Limited, trading as Taylor & Francis Group.
PY - 2023
Y1 - 2023
AB - The interpretation of novel noun-noun compounds (NNCs, e.g. “devil salary”) requires the combination of nouns in the absence of syntactic cues, an interesting facet of complex meaning creation. Here we examine unconstrained interpretations of a large set of novel NNCs to investigate how NNC constituents are combined into novel complex meanings. The data show that words’ lexical-semantic features (e.g. material, agentivity, imageability, semantic similarity) differentially contribute to the grammatical relations and the semantics of NNC interpretations. Further, we demonstrate that passive interpretations incur higher processing cost (longer interpretation times and more eye movements) than active interpretations. Finally, we show that large language models (GPT-2, BERT, RoBERTa) can predict whether an NNC is interpretable by human participants and estimate differences in processing cost, but do not exhibit sensitivity to more subtle grammatical differences. The experiments illuminate how humans can use lexical-semantic features to interpret NNCs in the absence of explicit syntactic information.
KW - Noun-noun compounds
KW - eye-tracking
KW - language models
KW - lexical-semantic features
KW - verb diathesis
UR - http://www.scopus.com/inward/record.url?scp=85170670425&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85170670425&partnerID=8YFLogxK
U2 - 10.1080/23273798.2023.2254865
DO - 10.1080/23273798.2023.2254865
M3 - Article
AN - SCOPUS:85170670425
SN - 2327-3798
JO - Language, Cognition and Neuroscience
JF - Language, Cognition and Neuroscience
ER -