In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax

Aaron Mueller, Albert Webson, Jackson Petty, Tal Linzen

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    In-context learning (ICL) is now a common method for teaching large language models (LLMs) new tasks: given labeled examples in the input context, the LLM learns to perform the task without weight updates. Do models guided via ICL infer the underlying structure of the task defined by the context, or do they rely on superficial heuristics that only generalize to identically distributed examples? We address this question using transformation tasks and an NLI task that assess sensitivity to syntax—a requirement for robust language understanding. We further investigate whether out-of-distribution generalization can be improved via chain-of-thought prompting, where the model is provided with a sequence of intermediate computation steps that illustrate how the task ought to be performed. In experiments with models from the GPT, PaLM, and Llama 2 families, we find large variance across LMs. The variance is explained more by the composition of the pre-training corpus and supervision methods than by model size; in particular, models pre-trained on code generalize better, and benefit more from chain-of-thought prompting.
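
    The following is a minimal sketch of how an ICL prompt for one such syntactic transformation task (declarative-to-question formation) might be assembled, with an optional chain-of-thought variant that spells out the intermediate steps. The example sentences, rationales, and the build_prompt helper are illustrative assumptions, not the paper's actual prompts or data.

# Minimal sketch (illustrative, not the paper's exact prompts): building an
# in-context learning prompt for a declarative-to-question transformation,
# with an optional chain-of-thought variant.

FEW_SHOT_EXAMPLES = [
    ("The dog that chased the cat is happy.",
     "Is the dog that chased the cat happy?"),
    ("The boy who is tall can swim.",
     "Can the boy who is tall swim?"),
]

# Hypothetical rationales spelling out the intermediate computation steps.
COT_RATIONALES = [
    "The main-clause auxiliary is 'is' (not the 'is' inside the relative "
    "clause), so move it to the front.",
    "The main-clause auxiliary is 'can', so move it to the front.",
]

def build_prompt(test_sentence: str, use_cot: bool = False) -> str:
    """Concatenate labeled demonstrations and the test input into one prompt."""
    lines = []
    for i, (declarative, question) in enumerate(FEW_SHOT_EXAMPLES):
        lines.append(f"Sentence: {declarative}")
        if use_cot:
            lines.append(f"Reasoning: {COT_RATIONALES[i]}")
        lines.append(f"Question: {question}")
        lines.append("")
    lines.append(f"Sentence: {test_sentence}")
    lines.append("Reasoning:" if use_cot else "Question:")
    return "\n".join(lines)

if __name__ == "__main__":
    # An out-of-distribution probe: the correct output requires fronting the
    # main-clause auxiliary, not the linearly first auxiliary.
    print(build_prompt("The girl who was reading has left.", use_cot=True))

    A syntax-sensitive model should front the main-clause auxiliary ("has"); a model relying on a linear-order heuristic would instead front the first auxiliary it encounters ("was").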

    Original language: English (US)
    Title of host publication: Long Papers
    Editors: Kevin Duh, Helena Gomez, Steven Bethard
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 4761-4779
    Number of pages: 19
    ISBN (Electronic): 9798891761148
    DOIs
    State: Published - 2024
    Event: 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024 - Hybrid, Mexico City, Mexico
    Duration: Jun 16 2024 - Jun 21 2024

    Publication series

    Name: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024
    Volume: 1

    Conference

    Conference: 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024
    Country/Territory: Mexico
    City: Hybrid, Mexico City
    Period: 6/16/24 - 6/21/24

    ASJC Scopus subject areas

    • Computer Networks and Communications
    • Hardware and Architecture
    • Information Systems
    • Software
