Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark

Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Knowledge-grounded dialogue systems powered by large language models often generate responses that, while fluent, are not attributable to a relevant source of information. Progress towards models that do not exhibit this issue requires evaluation metrics that can quantify its prevalence. To this end, we introduce the Benchmark for Evaluation of Grounded INteraction (BEGIN), comprising 12k dialogue turns generated by neural dialogue systems trained on three knowledge-grounded dialogue corpora. We collect human annotations assessing the extent to which the models’ responses can be attributed to the given background information. We then use BEGIN to analyze eight evaluation metrics. We find that these metrics rely on spurious correlations, do not reliably distinguish attributable abstractive responses from unattributable ones, and perform substantially worse when the knowledge source is longer. Our findings underscore the need for more sophisticated and robust evaluation metrics for knowledge-grounded dialogue. We make BEGIN publicly available at https://github.com/google/BEGIN-dataset.

    Original language: English (US)
    Pages (from-to): 1066-1083
    Number of pages: 18
    Journal: Transactions of the Association for Computational Linguistics
    Volume: 10
    State: Published - 2022

    ASJC Scopus subject areas

    • Communication
    • Human-Computer Interaction
    • Linguistics and Language
    • Computer Science Applications
    • Artificial Intelligence

