Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks

R. Thomas McCoy, Robert Frank, Tal Linzen

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Learners that are exposed to the same training data might generalize differently due to differing inductive biases. In neural network models, inductive biases could in theory arise from any aspect of the model architecture. We investigate which architectural factors affect the generalization behavior of neural sequence-to-sequence models trained on two syntactic tasks, English question formation and English tense reinflection. For both tasks, the training set is consistent with a generalization based on hierarchical structure and a generalization based on linear order. All architectural factors that we investigated qualitatively affected how models generalized, including factors with no clear connection to hierarchical structure. For example, LSTMs and GRUs displayed qualitatively different inductive biases. However, the only factor that consistently contributed a hierarchical bias across tasks was the use of a tree-structured model rather than a model with sequential recurrence, suggesting that human-like syntactic generalization requires architectural syntactic structure.
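
    To make the ambiguity described above concrete, here is a minimal illustrative sketch (not the paper's code, models, or dataset; the example sentences and rule implementations are hypothetical) showing how a linear-order rule and a hierarchical rule for English question formation agree on simple sentences but diverge once a relative clause introduces an earlier auxiliary:

```python
# Illustrative only: a "linear" rule fronts the first auxiliary in the string,
# while a "hierarchical" rule fronts the main-clause auxiliary. Both rules make
# the same prediction on simple sentences, so training data of that kind cannot
# distinguish them; they diverge on sentences with a relative clause.

AUX = {"does", "doesn't", "is", "isn't", "can", "can't"}

def linear_rule(words):
    """Front the FIRST auxiliary in the sentence (linear-order generalization)."""
    i = next(idx for idx, w in enumerate(words) if w in AUX)
    return [words[i]] + words[:i] + words[i + 1:]

def hierarchical_rule(words, main_aux_index):
    """Front the MAIN-CLAUSE auxiliary (hierarchical generalization).
    The main-clause auxiliary position is supplied by hand, since this
    sketch has no parser."""
    i = main_aux_index
    return [words[i]] + words[:i] + words[i + 1:]

# Training-style sentence: only one auxiliary, so the two rules agree.
simple = "the newt does see the raven".split()
assert linear_rule(simple) == hierarchical_rule(simple, main_aux_index=2)

# Generalization sentence: the relative clause adds an earlier auxiliary,
# so the rules now produce different questions.
complex_ = "the newt that does giggle doesn't see the raven".split()
print(" ".join(linear_rule(complex_)))                           # fronts "does"
print(" ".join(hierarchical_rule(complex_, main_aux_index=5)))   # fronts "doesn't"
```

    A model trained only on the simple cases could adopt either rule; which one it extrapolates to the complex cases reveals its inductive bias.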

    Original language: English (US)
    Pages (from-to): 125-140
    Number of pages: 16
    Journal: Transactions of the Association for Computational Linguistics
    Volume: 8
    DOIs
    State: Published - 2020

    ASJC Scopus subject areas

    • Communication
    • Human-Computer Interaction
    • Linguistics and Language
    • Computer Science Applications
    • Artificial Intelligence
