Towards AI-complete question answering: A set of prerequisite toy tasks

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart Van Merriënboer, Armand Joulin, Tomas Mikolov

Research output: Contribution to conference › Paper › peer-review

Abstract

One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems cannot currently solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
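The abstract's mention of "chaining facts" can be made concrete with a small illustrative sketch. The story, names, and rule-based helper below are invented for illustration and are not the paper's released data or the Memory Networks model; they only show the kind of question that requires combining two facts to answer.

    # Illustrative sketch only (assumed toy example, not the paper's dataset):
    # answering "Where is the apple?" requires chaining "John picked up the apple"
    # with "John went to the kitchen".

    def answer_where_is(story, obj):
        """Return the last known location of `obj` by chaining holder and movement facts."""
        holder, locations = None, {}
        for sentence in story:
            words = sentence.rstrip(".").split()
            if "picked" in words and words[-1] == obj:   # e.g. "John picked up the apple."
                holder = words[0]
            elif "went" in words:                        # e.g. "John went to the kitchen."
                locations[words[0]] = words[-1]
        return locations.get(holder)

    story = [
        "John picked up the apple.",
        "John went to the kitchen.",
        "Mary went to the garden.",
    ]
    print(answer_where_is(story, "apple"))  # -> kitchen (two facts must be combined)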

Original language: English (US)
State: Published - 2016
Event: 4th International Conference on Learning Representations, ICLR 2016 - San Juan, Puerto Rico
Duration: May 2, 2016 – May 4, 2016

Conference

Conference: 4th International Conference on Learning Representations, ICLR 2016
Country/Territory: Puerto Rico
City: San Juan
Period: 5/2/16 – 5/4/16

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics
