Learning distributed word representations for natural logic reasoning

Samuel R. Bowman, Christopher Potts, Christopher D. Manning

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    Abstract

    Natural logic offers a powerful relational conception of meaning that is a natural counterpart to distributed semantic representations, which have proven valuable in a wide range of sophisticated language tasks. However, it remains an open question whether it is possible to train distributed representations to support the rich, diverse logical reasoning captured by natural logic. We address this question using two neural network-based models for learning embeddings: plain neural networks and neural tensor networks. Our experiments evaluate the models' ability to learn the basic algebra of natural logic relations from simulated data and from the WordNet noun graph. The overall positive results are promising for the future of learned distributed representations in the applied modeling of logical semantics.
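
    The two embedding-comparison architectures named in the abstract can be sketched concretely. The snippet below is a minimal illustration, not the authors' released implementation: it shows a plain neural network layer and a neural tensor network layer, each mapping a pair of word embeddings to a hidden vector that a softmax classifier turns into a distribution over natural logic relation labels. All dimensions, the tanh activation, the seven-relation label set, and the random parameter initialization are assumptions introduced here for illustration.

```python
# Minimal sketch (assumed details, not the paper's released code) of the two
# comparison layers described in the abstract, implemented with NumPy.
import numpy as np

rng = np.random.default_rng(0)

DIM = 16         # word embedding size (assumed)
HIDDEN = 24      # comparison-layer size (assumed)
RELATIONS = 7    # natural logic relations (e.g. equivalence, forward/backward
                 # entailment, negation, alternation, cover, independence)

def plain_nn_layer(x1, x2, W, b):
    """Plain NN comparison: h = tanh(W [x1; x2] + b)."""
    return np.tanh(W @ np.concatenate([x1, x2]) + b)

def ntn_layer(x1, x2, T, W, b):
    """Neural tensor network comparison: adds a bilinear term x1^T T[k] x2
    for each of the HIDDEN output units."""
    bilinear = np.einsum('i,kij,j->k', x1, T, x2)
    return np.tanh(bilinear + W @ np.concatenate([x1, x2]) + b)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Randomly initialized parameters stand in for learned ones.
W = rng.normal(scale=0.1, size=(HIDDEN, 2 * DIM))
b = np.zeros(HIDDEN)
T = rng.normal(scale=0.1, size=(HIDDEN, DIM, DIM))
W_cls = rng.normal(scale=0.1, size=(RELATIONS, HIDDEN))

x1, x2 = rng.normal(size=DIM), rng.normal(size=DIM)   # two word embeddings
probs = softmax(W_cls @ ntn_layer(x1, x2, T, W, b))   # distribution over relations
print(probs.round(3))
```

    The tensor term lets each hidden unit model a multiplicative interaction between the two embeddings, which is the main capacity difference between the two models compared in the paper.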

    Original language: English (US)
    Title of host publication: Knowledge Representation and Reasoning
    Subtitle of host publication: Integrating Symbolic and Neural Approaches - Papers from the AAAI Spring Symposium, Technical Report
    Publisher: AI Access Foundation
    Pages: 10-13
    Number of pages: 4
    ISBN (Electronic): 9781577357070
    State: Published - 2015
    Event: 2015 AAAI Spring Symposium - Palo Alto, United States
    Duration: Mar 23 2015 - Mar 25 2015

    Publication series

    Name: AAAI Spring Symposium - Technical Report
    Volume: SS-15-03

    Other

    Other: 2015 AAAI Spring Symposium
    Country/Territory: United States
    City: Palo Alto
    Period: 3/23/15 - 3/25/15

    ASJC Scopus subject areas

    • Artificial Intelligence
