A View From The Crowd: Evaluation Challenges for Time-Offset Interaction Applications

Alberto M. Chierici, Nizar Habash

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Dialogue systems such as chatbots and tasks such as question answering (QA) have gained traction in recent years, yet evaluating such systems remains difficult. Reasons include the great variety of contexts and use cases for these systems as well as the high cost of human evaluation. In this paper, we focus on a specific type of dialogue system: Time-Offset Interaction Applications (TOIAs), intelligent conversational software that simulates face-to-face conversations between humans and pre-recorded human avatars. Under the constraint that a TOIA is a single-output system interacting with users who have different expectations, we identify two challenges: first, how do we define a ‘good’ answer? And second, what is an appropriate metric to use? We explore both challenges through the creation of a novel dataset that identifies multiple good answers to specific TOIA questions with the help of Amazon Mechanical Turk workers. This ‘view from the crowd’ allows us to study the variation in how TOIA interrogators perceive its answers. Our contributions include the annotated dataset, which we make publicly available, and the proposal of Success Rate @k as an evaluation metric that is more appropriate than traditional QA and information retrieval metrics.
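The abstract names Success Rate @k but does not define it here. A minimal Python sketch follows, assuming the metric counts a question as a success when at least one crowd-annotated good answer appears among the system's top-k returned responses; the function and variable names are illustrative, not taken from the paper.

    # Sketch of Success Rate @k under the assumption described above.
    def success_rate_at_k(ranked_answers_per_question, good_answers_per_question, k):
        """ranked_answers_per_question: one ranked list of candidate answers per question.
        good_answers_per_question: one set of crowd-validated 'good' answers per question.
        Returns the fraction of questions with at least one good answer in the top k."""
        if not ranked_answers_per_question:
            return 0.0
        successes = 0
        for ranked, good in zip(ranked_answers_per_question, good_answers_per_question):
            # A question counts as a success if any of its top-k candidates is judged good.
            if any(answer in good for answer in ranked[:k]):
                successes += 1
        return successes / len(ranked_answers_per_question)

    # Toy usage: the first question has a good answer in its top 2, the second does not.
    ranked = [["a1", "a2", "a3"], ["b1", "b2", "b3"]]
    good = [{"a2"}, {"b5"}]
    print(success_rate_at_k(ranked, good, k=2))  # -> 0.5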

Original language: English (US)
Title of host publication: Human Evaluation of NLP Systems, HumEval 2021 - Proceedings of the Workshop, as part of the 16th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2021
Editors: Anya Belz, Shubham Agarwal, Yvette Graham, Ehud Reiter, Anastasia Shimorina
Publisher: Association for Computational Linguistics (ACL)
Pages: 75-85
Number of pages: 11
ISBN (Electronic): 9781954085107
State: Published - 2021
Event: 1st Workshop on Human Evaluation of NLP Systems, HumEval 2021 - Virtual, Online
Duration: Apr 19 2021 → …

Publication series

Name: Human Evaluation of NLP Systems, HumEval 2021 - Proceedings of the Workshop, as part of the 16th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2021

Conference

Conference: 1st Workshop on Human Evaluation of NLP Systems, HumEval 2021
City: Virtual, Online
Period: 4/19/21 → …

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics
