Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models

Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Relations between words are governed by hierarchical structure rather than linear ordering. Sequence-to-sequence (seq2seq) models, despite their success in downstream NLP applications, often fail to generalize in a hierarchy-sensitive manner when performing syntactic transformations (for example, transforming declarative sentences into questions). However, syntactic evaluations of seq2seq models have only observed models that were not pre-trained on natural language data before being trained to perform syntactic transformations, in spite of the fact that pre-training has been found to induce hierarchical linguistic generalizations in language models; in other words, the syntactic capabilities of seq2seq models may have been greatly understated. We address this gap using the pre-trained seq2seq models T5 and BART, as well as their multilingual variants mT5 and mBART. We evaluate whether they generalize hierarchically on two transformations in two languages: question formation and passivization in English and German. We find that pre-trained seq2seq models generalize hierarchically when performing syntactic transformations, whereas models trained from scratch on syntactic transformations do not. This result presents evidence for the learnability of hierarchical syntactic information from non-annotated natural language text while also demonstrating that seq2seq models are capable of syntactic generalization, though only after exposure to much more language data than human learners receive.
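
    The sketch below (not the authors' released code) illustrates the kind of diagnostic the abstract describes for question formation: a declarative sentence with two auxiliaries is transformed into a question, and a hierarchy-sensitive model fronts the main-clause auxiliary rather than the linearly first one. It assumes a T5-style seq2seq checkpoint that has already been fine-tuned on declarative-to-question pairs; the model name and sentence format here are illustrative placeholders.

    ```python
    # Illustrative sketch only; assumes a seq2seq checkpoint fine-tuned on
    # declarative -> question pairs. "t5-base" is a placeholder model name;
    # the paper evaluates T5/BART and their multilingual variants mT5/mBART.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    MODEL_NAME = "t5-base"  # placeholder checkpoint
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

    # Declarative sentence with two auxiliaries. A hierarchical generalization
    # fronts the main-clause auxiliary ("is"); a linear "move the first
    # auxiliary" heuristic would front "has" instead.
    declarative = "the yak that has eaten is sleeping ."
    hierarchical_target = "is the yak that has eaten sleeping ?"
    linear_target = "has the yak that eaten is sleeping ?"

    inputs = tokenizer(declarative, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=20)
    prediction = tokenizer.decode(output_ids[0], skip_special_tokens=True).strip()

    print("prediction:              ", prediction)
    print("hierarchical generalization:", prediction == hierarchical_target)
    print("linear generalization:      ", prediction == linear_target)
    ```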

    Original language: English (US)
    Title of host publication: ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, Findings of ACL 2022
    Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 1352-1368
    Number of pages: 17
    ISBN (Electronic): 9781955917254
    State: Published - 2022
    Event: 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022 - Dublin, Ireland
    Duration: May 22, 2022 - May 27, 2022

    Publication series

    Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
    ISSN (Print): 0736-587X

    Conference

    Conference: 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022
    Country/Territory: Ireland
    City: Dublin
    Period: 5/22/22 - 5/27/22

    ASJC Scopus subject areas

    • Computer Science Applications
    • Linguistics and Language
    • Language and Linguistics
