TY - CONF
T1 - SQuALITY: Building a Long-Document Summarization Dataset the Hard Way
T2 - 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
AU - Wang, Alex
AU - Pang, Richard Yuanzhe
AU - Chen, Angelica
AU - Phang, Jason
AU - Bowman, Samuel R.
N1 - Funding Information:
We thank the writers who created and reviewed the summaries: Christina Li, Cathy Liu, Mateo Pardo, Alexandra Rumyantseva, Pei-Ling Wu, Alicia Chatten, Dolly Farha, Jamie Swanger, Isaiah Swanson, and other anonymous writers. We also thank the members of the ML2 lab at NYU for providing helpful feedback in the early stages of this project, particularly Nitish Joshi, Nikita Nangia, and Kyunghyun Cho. Finally, we thank Peter Liu, Wojciech Kryściński, and Sebastian Gehrmann for helpful discussions about summarization datasets. This project has benefited from financial support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program) and Apple, and from in-kind support by the NYU High-Performance Computing Center and Google Cloud. This material is based upon work supported by the National Science Foundation under Grant Nos. 1922658 and 2046556. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Publisher Copyright:
© 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
N2 - Summarization datasets are often assembled either by scraping naturally occurring public-domain summaries – which are nearly always in difficult-to-work-with technical domains – or by using approximate heuristics to extract them from everyday text – which frequently yields unfaithful summaries. In this work, we turn to a slower but more straightforward approach to developing summarization benchmark data: We hire highly-qualified contractors to read stories and write original summaries from scratch. To amortize reading time, we collect five summaries per document, with the first giving an overview and the subsequent four addressing specific questions. We use this protocol to collect SQuALITY, a dataset of question-focused summaries built on the same public-domain short stories as the multiple-choice dataset QuALITY (Pang et al., 2021b). Experiments with state-of-the-art summarization systems show that our dataset is challenging and that existing automatic evaluation metrics are weak indicators of quality. SQuALITY is available at https://github.com/nyu-mll/SQuALITY.
AB - Summarization datasets are often assembled either by scraping naturally occurring public-domain summaries – which are nearly always in difficult-to-work-with technical domains – or by using approximate heuristics to extract them from everyday text – which frequently yields unfaithful summaries. In this work, we turn to a slower but more straightforward approach to developing summarization benchmark data: We hire highly-qualified contractors to read stories and write original summaries from scratch. To amortize reading time, we collect five summaries per document, with the first giving an overview and the subsequent four addressing specific questions. We use this protocol to collect SQuALITY, a dataset of question-focused summaries built on the same public-domain short stories as the multiple-choice dataset QuALITY (Pang et al., 2021b). Experiments with state-of-the-art summarization systems show that our dataset is challenging and that existing automatic evaluation metrics are weak indicators of quality. SQuALITY is available at https://github.com/nyu-mll/SQuALITY.
UR - http://www.scopus.com/inward/record.url?scp=85149436499&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85149436499&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85149436499
SP - 1139
EP - 1156
Y2 - 7 December 2022 through 11 December 2022
ER -