TY - GEN
T1 - QuALITY: Question Answering with Long Input Texts, Yes!
T2 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022
AU - Pang, Richard Yuanzhe
AU - Parrish, Alicia
AU - Joshi, Nitish
AU - Nangia, Nikita
AU - Phang, Jason
AU - Chen, Angelica
AU - Padmakumar, Vishakh
AU - Ma, Johnny
AU - Thompson, Jana
AU - He, He
AU - Bowman, Samuel R.
N1 - Funding Information:
This project has benefited from financial support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), Samsung Research (under the project Improving Deep Learning using Latent Structure), Samsung Advanced Institute of Technology (under the project Next Generation Deep Learning: From Pattern Recognition to AI), and Apple. This material is based upon work supported by the National Science Foundation under Grant Nos. 1922658 and 2046556. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We thank Jon Ander Campos, Alex Wang, Saku Sugawara, and Omer Levy for valuable discussion. We thank the anonymous reviewers for useful feedback. Finally, we thank the writers who wrote our source texts (credited in the data itself) and the writers who wrote our questions: Megan Barbee, Bridget Barrett, Kourtney Bradley, Kyle J. Brown, Alicia Chatten, Christine D., Leah Dorschner-Karim, Bobbie Dunn, Charisse Hake, Javier Hernandez, Molly Montgomery, Carilee Moran, Tracy M. Snyder, Lorna Stevenson, Isaiah Swanson, Kyla Thiel, Lisa V., Ryan Warrick, Julia Williamson, and others who chose to remain anonymous.
Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
N2 - To enable building and testing models on long-document comprehension, we introduce QuALITY, a multiple-choice QA dataset with context passages in English that have an average length of about 5,000 tokens, much longer than typical current models can process. Unlike in prior work with passages, our questions are written and validated by contributors who have read the entire passage, rather than relying on summaries or excerpts. In addition, only half of the questions are answerable by annotators working under tight time constraints, indicating that skimming and simple search are not enough to consistently perform well. Our baseline models perform poorly on this task (55.4%) and significantly lag behind human performance (93.5%).
UR - http://www.scopus.com/inward/record.url?scp=85137223398&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137223398&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85137223398
T3 - NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference
SP - 5336
EP - 5358
BT - NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics
PB - Association for Computational Linguistics (ACL)
Y2 - 10 July 2022 through 15 July 2022
ER -