Abstract
Recursive neural network models and their accompanying vector representations for words have seen success in an array of increasingly semantically sophisticated tasks, but almost nothing is known about their ability to accurately capture the aspects of linguistic meaning that are necessary for interpretation or reasoning. To evaluate this, I train a recursive model on a new corpus of constructed examples of logical reasoning in short sentences, like the inference of *some animal walks* from *some dog walks or some cat walks*, given that dogs and cats are animals. This model learns representations that generalize well to new types of reasoning patterns in all but a few cases, a result which is promising for the ability of learned representation models to capture logical reasoning.
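The setup the abstract describes composes word vectors up a binary parse tree with a tree-structured network, then classifies the logical relation between two resulting sentence vectors. The following is a minimal illustrative sketch of that pipeline in NumPy, not the paper's implementation: the paper uses a neural *tensor* composition layer, while this sketch uses a plain matrix layer, and the dimensions, relation count, and names such as `compose`, `encode`, and `predict_relation` are assumptions made here for illustration.

```python
import numpy as np

# Hypothetical sizes; the paper's actual dimensions may differ.
DIM = 16          # word/phrase vector size
N_RELATIONS = 7   # e.g. a MacCartney-style set of logical relations

rng = np.random.default_rng(0)

# Composition parameters for a plain recursive network: the two child
# vectors are concatenated and mapped back to one phrase vector.
W = rng.normal(scale=0.1, size=(DIM, 2 * DIM))
b = np.zeros(DIM)

def compose(left, right):
    """Combine two child vectors into a single parent vector."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

def encode(tree, embeddings):
    """Recursively encode a binary parse tree.

    A tree is either a word (str) or a pair (left, right) of subtrees."""
    if isinstance(tree, str):
        return embeddings[tree]
    left, right = tree
    return compose(encode(left, embeddings), encode(right, embeddings))

# Comparison layer: the two sentence vectors are concatenated and a
# softmax scores each candidate logical relation between them.
Wc = rng.normal(scale=0.1, size=(N_RELATIONS, 2 * DIM))

def predict_relation(premise_vec, hypothesis_vec):
    logits = Wc @ np.concatenate([premise_vec, hypothesis_vec])
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy usage with random (untrained) word embeddings.
vocab = ["some", "dog", "animal", "walks"]
embeddings = {w: rng.normal(scale=0.1, size=DIM) for w in vocab}

premise = (("some", "dog"), "walks")        # "some dog walks"
hypothesis = (("some", "animal"), "walks")  # "some animal walks"
probs = predict_relation(encode(premise, embeddings),
                         encode(hypothesis, embeddings))
print(probs)  # untrained, so roughly uniform over the relations
```

With trained parameters and embeddings, the output distribution would concentrate on the correct relation (here, forward entailment); the corpus of constructed reasoning examples supplies the supervision for that training.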
| Original language | English (US) |
| --- | --- |
| State | Published - Jan 1 2014 |
| Event | 2nd International Conference on Learning Representations, ICLR 2014 - Banff, Canada. Duration: Apr 14 2014 → Apr 16 2014 |
Conference

| Conference | 2nd International Conference on Learning Representations, ICLR 2014 |
| --- | --- |
| Country/Territory | Canada |
| City | Banff |
| Period | 4/14/14 → 4/16/14 |
ASJC Scopus subject areas
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics
- Education