Using priming to uncover the organization of syntactic representations in neural language models

Grusha Prasad, Marten van Schijndel, Tal Linzen

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Neural language models (LMs) perform well on tasks that require sensitivity to syntactic structure. Drawing on the syntactic priming paradigm from psycholinguistics, we propose a novel technique to analyze the representations that enable such success. By establishing a gradient similarity metric between structures, this technique allows us to reconstruct the organization of the LMs' syntactic representational space. We use this technique to demonstrate that LSTM LMs' representations of different types of sentences with relative clauses are organized hierarchically in a linguistically interpretable manner, suggesting that the LMs track abstract properties of the sentence.
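
    The technique described in the abstract can be thought of as priming-by-adaptation: briefly fine-tune the LM on sentences of one structure and measure how much surprisal drops on sentences of another structure; the size of that drop serves as a graded similarity between the two structures. The sketch below is a minimal illustration of that idea, not the authors' released code; the toy model, the two toy "structures", and all hyperparameters (TinyLSTMLM, STRUCTURE_A/STRUCTURE_B, learning rate, epochs) are illustrative assumptions.

```python
# Minimal sketch of priming-as-adaptation for measuring structural similarity.
# All names, sentence sets, and hyperparameters are illustrative assumptions.
import copy
import torch
import torch.nn as nn

# Two toy "structures" (e.g., two relative-clause types).
STRUCTURE_A = ["the boy that the girl saw laughed", "the dog that the cat chased barked"]
STRUCTURE_B = ["the boy that saw the girl laughed", "the dog that chased the cat barked"]

def build_vocab(sentences):
    words = sorted({w for s in sentences for w in s.split()})
    return {w: i for i, w in enumerate(words)}

VOCAB = build_vocab(STRUCTURE_A + STRUCTURE_B)

def encode(sentence):
    return torch.tensor([VOCAB[w] for w in sentence.split()], dtype=torch.long)

class TinyLSTMLM(nn.Module):
    """A very small word-level LSTM language model."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        hidden, _ = self.lstm(self.embed(ids))
        return self.out(hidden)

def mean_surprisal(model, sentences):
    """Average per-word surprisal (negative log probability, in nats)."""
    model.eval()
    loss_fn = nn.CrossEntropyLoss()
    total, count = 0.0, 0
    with torch.no_grad():
        for s in sentences:
            ids = encode(s).unsqueeze(0)
            logits = model(ids[:, :-1])
            n_tokens = ids.size(1) - 1
            total += loss_fn(logits.squeeze(0), ids[0, 1:]).item() * n_tokens
            count += n_tokens
    return total / count

def adapt(model, prime_sentences, lr=1e-3, epochs=3):
    """Return a copy of the model fine-tuned ('primed') on the prime sentences."""
    primed = copy.deepcopy(model)
    primed.train()
    opt = torch.optim.SGD(primed.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for s in prime_sentences:
            ids = encode(s).unsqueeze(0)
            loss = loss_fn(primed(ids[:, :-1]).squeeze(0), ids[0, 1:])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return primed

def adaptation_effect(model, prime, target):
    """Surprisal reduction on `target` after priming on `prime`:
    larger reductions indicate that the LM treats the structures as more similar."""
    before = mean_surprisal(model, target)
    after = mean_surprisal(adapt(model, prime), target)
    return before - after

if __name__ == "__main__":
    torch.manual_seed(0)
    lm = TinyLSTMLM(len(VOCAB))
    # A 2x2 "similarity matrix" over the two toy structures.
    for prime_name, prime in [("A", STRUCTURE_A), ("B", STRUCTURE_B)]:
        for target_name, target in [("A", STRUCTURE_A), ("B", STRUCTURE_B)]:
            effect = adaptation_effect(lm, prime, target)
            print(f"prime={prime_name} target={target_name} effect={effect:.3f}")
```

    Comparing such adaptation effects across many structure pairs yields the gradient similarity metric the abstract refers to, from which the organization of the representational space can be reconstructed (e.g., by clustering the resulting matrix).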

    Original language: English (US)
    Title of host publication: CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference
    Publisher: Association for Computational Linguistics
    Pages: 66-76
    Number of pages: 11
    ISBN (Electronic): 9781950737727
    State: Published - 2019
    Event: 23rd Conference on Computational Natural Language Learning, CoNLL 2019 - Hong Kong, China
    Duration: Nov 3, 2019 - Nov 4, 2019

    Publication series

    Name: CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference

    Conference

    Conference: 23rd Conference on Computational Natural Language Learning, CoNLL 2019
    Country: China
    City: Hong Kong
    Period: 11/3/19 - 11/4/19

    ASJC Scopus subject areas

    • Computer Science Applications
    • Information Systems
    • Computational Theory and Mathematics

