longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks

Venelin Kovatchev, Trina Chatterjee, Venkata S. Govindarajan, Jifan Chen, Eunsol Choi, Gabriella Chronis, Anubrata Das, Katrin Erk, Matthew Lease, Junyi Jessy Li, Yating Wu, Kyle Mahowald

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability. Here, we describe the approach of the team “longhorns” on Task 1 of the First Workshop on Dynamic Adversarial Data Collection (DADC), which asked teams to manually fool a model on an Extractive Question Answering task. Our team finished first, with a model error rate of 62%. We advocate for a systematic, linguistically informed approach to formulating adversarial questions, and we describe the results of our pilot experiments, as well as our official submission.
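To make the task concrete, below is a minimal sketch (not the authors' code) of what "fooling" an extractive QA model looks like: the model must extract an answer span from a passage, and an adversarially phrased question can elicit the wrong span. The sketch assumes the Hugging Face `transformers` library; the model name, passage, and questions are illustrative, not taken from the paper.

```python
# Minimal illustration of adversarially probing an extractive QA model.
# Assumes `pip install transformers`; the model and examples are hypothetical
# stand-ins, not the DADC task model or the team's actual questions.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

passage = (
    "The Eiffel Tower, completed in 1889, is 330 metres tall. "
    "The nearby Montparnasse Tower, completed in 1973, is 210 metres tall."
)

# A straightforward lookup question the model typically answers correctly.
print(qa(question="How tall is the Eiffel Tower?", context=passage))

# A comparative question: reasoning across two entities is a known weak
# spot for extractive QA models, which may extract the wrong span here.
print(qa(question="Which tower was completed more recently?", context=passage))
```

Systematically varying the linguistic form of questions in this way, rather than guessing ad hoc, is the kind of approach the abstract advocates.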

Original language: English (US)
Title of host publication: DADC 2022 - 1st Workshop on Dynamic Adversarial Data Collection, Proceedings of the Workshop
Editors: Max Bartolo, Hannah Rose Kirk, Pedro Rodriguez, Katerina Margatina, Tristan Thrush, Robin Jia, Pontus Stenetorp, Adina Williams, Douwe Kiela
Publisher: Association for Computational Linguistics (ACL)
Pages: 41-52
Number of pages: 12
ISBN (Electronic): 9781955917940
State: Published - 2022
Event: 1st Workshop on Dynamic Adversarial Data Collection, DADC 2022 - Seattle, United States
Duration: Jul 14 2022 → …

Publication series

Name: DADC 2022 - 1st Workshop on Dynamic Adversarial Data Collection, Proceedings of the Workshop

Conference

Conference: 1st Workshop on Dynamic Adversarial Data Collection, DADC 2022
Country/Territory: United States
City: Seattle
Period: 7/14/22 → …

ASJC Scopus subject areas

  • Language and Linguistics
  • Computer Science Applications
  • Linguistics and Language
