Toward a comprehension challenge, using crowdsourcing as a tool

Praveen Paritosh, Gary Marcus

Research output: Contribution to journal › Article › peer-review


Human readers comprehend vastly more, and in vastly different ways, than any existing comprehension test would suggest. An ideal comprehension test for a story should cover the full range of questions and answers that humans would expect other humans to reasonably learn or infer from a given story. As a step toward these goals, we propose a novel test, the crowdsourced comprehension challenge (C3), which is constructed by repeated runs of a three-person game, the Iterative Crowdsourced Comprehension Game (ICCG). ICCG uses structured crowdsourcing to comprehensively generate relevant questions and supported answers for arbitrary stories, whether fiction or nonfiction, presented across a variety of media such as videos, podcasts, and still images.
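The abstract describes ICCG only at a high level: repeated rounds in which crowd participants generate relevant questions and supported answers for a story. A minimal sketch of such an iterative question–answer collection loop might look like the following; the role names (asker, answerer, judge) and the stopping rule are illustrative assumptions, not the paper's actual game design.

```python
# Hypothetical sketch of an iterative crowdsourced Q&A loop, loosely
# inspired by the abstract's description of ICCG. Role names and the
# stopping rule are assumptions for illustration only.

def run_qa_rounds(story, asker, answerer, judge, max_rounds=10):
    """Collect question/answer pairs about `story` until the asker runs
    out of new questions or max_rounds is reached; keep only pairs the
    judge validates as supported by the story."""
    qa_pairs = []
    for _ in range(max_rounds):
        question = asker(story, qa_pairs)   # propose a new question
        if question is None:                # asker is exhausted
            break
        answer = answerer(story, question)  # produce a supported answer
        if judge(story, question, answer):  # keep only validated pairs
            qa_pairs.append((question, answer))
    return qa_pairs

# Toy participants over a one-line "story".
story = "The cat sat on the mat."
questions = iter(["Who sat on the mat?", "Where did the cat sit?"])
asker = lambda s, seen: next(questions, None)
answerer = lambda s, q: "the cat" if q.startswith("Who") else "on the mat"
judge = lambda s, q, a: a.lower() in s.lower()

pairs = run_qa_rounds(story, asker, answerer, judge)
```

With real crowd workers in each role, the loop would terminate when no further relevant questions can be generated, yielding a comprehensive question set for the story.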

Original language: English (US)
Pages (from-to): 23-30
Number of pages: 8
Journal: AI Magazine
Issue number: 1
State: Published - 2016

ASJC Scopus subject areas

  • Artificial Intelligence


