Do Neural Language Models Compose Concepts the Way Humans Can?

Amilleah Rodriguez, Shaonan Wang, Liina Pylkkänen

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    While compositional interpretation is the core of language understanding, humans also derive meaning via inference. For example, while the phrase “the blue hat” introduces a blue hat into the discourse via the direct composition of “blue” and “hat,” the same discourse entity is introduced by the phrase “the blue color of this hat” despite the absence of any local composition between “blue” and “hat.” Instead, we infer that if the color is blue and it belongs to the hat, the hat must be blue. We tested the performance of neural language models and humans on such inferentially driven conceptual compositions, eliciting probability estimates for a noun in a syntactically composing phrase, “This blue hat,” following contexts that had introduced the conceptual combinations of those nouns and adjectives either syntactically or inferentially. Surprisingly, our findings reveal significant disparities between the performance of neural language models and human judgments. Among the eight models evaluated, RoBERTa, BERT-large, and GPT-2 exhibited the closest resemblance to human responses, while other models faced challenges in accurately identifying compositions in the provided contexts. Our study reveals that language models and humans may rely on different approaches to represent and compose lexical items across sentence structure. All data and code are accessible at https://github.com/wangshaonan/BlueHat.

    Original language: English (US)
    Title of host publication: 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024 - Main Conference Proceedings
    Editors: Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
    Publisher: European Language Resources Association (ELRA)
    Pages: 5309-5314
    Number of pages: 6
    ISBN (Electronic): 9782493814104
    State: Published - 2024
    Event: Joint 30th International Conference on Computational Linguistics and 14th International Conference on Language Resources and Evaluation, LREC-COLING 2024 - Hybrid, Torino, Italy
    Duration: May 20, 2024 - May 25, 2024

    Publication series

    Name: 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC-COLING 2024 - Main Conference Proceedings

    Conference

    Conference: Joint 30th International Conference on Computational Linguistics and 14th International Conference on Language Resources and Evaluation, LREC-COLING 2024
    Country/Territory: Italy
    City: Hybrid, Torino
    Period: 5/20/24 - 5/25/24

    Keywords

    • Composition
    • Dataset Construction
    • Inference
    • Neural Language Models

    ASJC Scopus subject areas

    • Theoretical Computer Science
    • Computational Theory and Mathematics
    • Computer Science Applications
