Abstract
We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.
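Below is a minimal, illustrative sketch of the architecture the abstract describes: sentences are written to a long-term memory, a question is scored against that memory, and inference chains over multiple supporting sentences before producing a textual response. It is not the authors' trained model; the bag-of-words random embeddings, dot-product scoring, fixed two-hop chaining, and returning the last retrieved sentence in place of a learned response module are all simplifying assumptions made for this sketch.

```python
# Illustrative memory-network-style sketch (assumed components, not the
# paper's learned I/G/O/R modules).
import numpy as np

VOCAB = {}  # word -> index, grown on the fly


def embed(text, dim=32):
    """Hypothetical bag-of-words embedding with a fixed random vector per word."""
    vec = np.zeros(dim)
    words = text.lower().split()
    for word in words:
        idx = VOCAB.setdefault(word, len(VOCAB))
        word_rng = np.random.default_rng(idx)  # deterministic per-word vector
        vec += word_rng.standard_normal(dim)
    return vec / max(len(words), 1)


class MemoryNetworkSketch:
    def __init__(self):
        self.memory = []  # long-term memory: (raw sentence, embedding) pairs

    def write(self, sentence):
        """Input/generalization step: map the input to features and store it."""
        self.memory.append((sentence, embed(sentence)))

    def read(self, query_vec):
        """Output step: return the supporting memory that best matches the query."""
        scores = [float(query_vec @ emb) for _, emb in self.memory]
        best = int(np.argmax(scores))
        return self.memory[best]

    def answer(self, question, hops=2):
        """Response step: chain `hops` supporting sentences, then respond with
        the last retrieved sentence as a stand-in for a learned decoder."""
        query_vec = embed(question)
        supporting = []
        for _ in range(hops):
            sentence, emb = self.read(query_vec)
            supporting.append(sentence)
            query_vec = query_vec + emb  # refine the query with retrieved evidence
        return supporting[-1]


if __name__ == "__main__":
    mn = MemoryNetworkSketch()
    for s in ["Joe went to the kitchen.",
              "Joe picked up the milk.",
              "Joe travelled to the office."]:
        mn.write(s)
    print(mn.answer("Where is the milk?"))
```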
| Original language | English (US) |
| --- | --- |
| State | Published - 2015 |
| Event | 3rd International Conference on Learning Representations, ICLR 2015 - San Diego, United States; Duration: May 7 2015 → May 9 2015 |
Conference
| Conference | 3rd International Conference on Learning Representations, ICLR 2015 |
| --- | --- |
| Country/Territory | United States |
| City | San Diego |
| Period | 5/7/15 → 5/9/15 |
ASJC Scopus subject areas
- Education
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics