Can recursive neural tensor networks learn logical reasoning?

Samuel R. Bowman

    Research output: Contribution to conference › Paper › peer-review

    Abstract

    Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of "some animal walks" from "some dog walks or some cat walks", given that dogs and cats are animals. This model learns representations that generalize well to new types of reasoning patterns in all but a few cases, a result which is promising for the ability of learned representation models to capture logical reasoning.
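
    The model named in the title composes learned word vectors up a binary parse tree using a neural tensor layer. As a rough illustration, the following is a minimal NumPy sketch of that composition step, assuming the standard recursive neural tensor network (RNTN) formulation of Socher et al. (2013), on which work of this kind builds; the dimensionality, initialization, and all variable names here are illustrative assumptions, not details taken from the paper.

    ```python
    import numpy as np

    # Illustrative sketch of one RNTN composition step (assumed formulation,
    # following Socher et al. 2013): parent = tanh([l;r]^T V [l;r] + W[l;r] + b).
    # Dimensions and initialization below are arbitrary choices for the demo.

    d = 16                                  # word/phrase vector dimensionality
    rng = np.random.default_rng(0)

    V = rng.normal(scale=0.01, size=(d, 2 * d, 2 * d))  # tensor term: one slice per output unit
    W = rng.normal(scale=0.01, size=(d, 2 * d))         # standard affine term
    b = np.zeros(d)

    def compose(left: np.ndarray, right: np.ndarray) -> np.ndarray:
        """Combine two child phrase vectors into one parent phrase vector."""
        c = np.concatenate([left, right])               # [l; r], shape (2d,)
        # Bilinear tensor term lets pairs of input features interact directly.
        tensor_term = np.einsum("i,kij,j->k", c, V, c)  # shape (d,)
        return np.tanh(tensor_term + W @ c + b)

    # Toy usage: compose "some" + "dog", then that phrase + "walks",
    # mirroring bottom-up processing of a binary parse tree.
    some, dog, walks = (rng.normal(size=d) for _ in range(3))
    some_dog_walks = compose(compose(some, dog), walks)
    print(some_dog_walks.shape)  # (16,)
    ```

    In the reasoning task the abstract describes, a pair of such sentence vectors would then feed a small classifier that predicts the logical relation (e.g., entailment) between the two sentences; that downstream layer is omitted from this sketch.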

    Original language: English (US)
    State: Published - Jan 1 2014
    Event: 2nd International Conference on Learning Representations, ICLR 2014 - Banff, Canada
    Duration: Apr 14 2014 - Apr 16 2014

    Conference

    Conference: 2nd International Conference on Learning Representations, ICLR 2014
    Country/Territory: Canada
    City: Banff
    Period: 4/14/14 - 4/16/14

    ASJC Scopus subject areas

    • Computer Science Applications
    • Linguistics and Language
    • Language and Linguistics
    • Education
