SQuALITY: Building a Long-Document Summarization Dataset the Hard Way

Alex Wang, Richard Yuanzhe Pang, Angelica Chen, Jason Phang, Samuel R. Bowman

    Research output: Contribution to conference › Paper › peer-review

    Abstract

    Summarization datasets are often assembled either by scraping naturally occurring public-domain summaries (which are nearly always in difficult-to-work-with technical domains) or by using approximate heuristics to extract them from everyday text (which frequently yields unfaithful summaries). In this work, we turn to a slower but more straightforward approach to developing summarization benchmark data: We hire highly-qualified contractors to read stories and write original summaries from scratch. To amortize reading time, we collect five summaries per document, with the first giving an overview and the subsequent four addressing specific questions. We use this protocol to collect SQuALITY, a dataset of question-focused summaries built on the same public-domain short stories as the multiple-choice dataset QuALITY (Pang et al., 2021b). Experiments with state-of-the-art summarization systems show that our dataset is challenging and that existing automatic evaluation metrics are weak indicators of quality. SQuALITY is available at https://github.com/nyu-mll/SQuALITY.

    Original language: English (US)
    Pages: 1139-1156
    Number of pages: 18
    State: Published - 2022
    Event: 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 - Abu Dhabi, United Arab Emirates
    Duration: Dec 7, 2022 - Dec 11, 2022

    Conference

    Conference: 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
    Country/Territory: United Arab Emirates
    City: Abu Dhabi
    Period: 12/7/22 - 12/11/22

    ASJC Scopus subject areas

    • Computational Theory and Mathematics
    • Computer Science Applications
    • Information Systems
