TY - GEN
T1 - Robustness and Adversarial Examples in Natural Language Processing
AU - Chang, Kai-Wei
AU - He, He
AU - Jia, Robin
AU - Singh, Sameer
N1 - Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021
Y1 - 2021
AB - Recent studies show that many NLP systems are sensitive and vulnerable to small perturbations of inputs and do not generalize well across different datasets. This lack of robustness derails the use of NLP systems in real-world applications. This tutorial aims to raise awareness of practical concerns about NLP robustness. It targets NLP researchers and practitioners who are interested in building reliable NLP systems. In particular, we will review recent studies on analyzing the weaknesses of NLP systems when facing adversarial inputs and data with a distribution shift. We will provide the audience with a holistic view of 1) how to use adversarial examples to examine the weaknesses of NLP models and facilitate debugging; 2) how to enhance the robustness of existing NLP models and defend against adversarial inputs; and 3) how the consideration of robustness affects real-world NLP applications used in our daily lives. We will conclude the tutorial by outlining future research directions in this area.
UR - http://www.scopus.com/inward/record.url?scp=85127450005&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85127450005&partnerID=8YFLogxK
U2 - 10.18653/v1/2021.emnlp-tutorials.5
DO - 10.18653/v1/2021.emnlp-tutorials.5
M3 - Conference contribution
AN - SCOPUS:85127450005
T3 - EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
SP - 22
EP - 26
BT - EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing
PB - Association for Computational Linguistics (ACL)
T2 - 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021
Y2 - 7 November 2021 through 11 November 2021
ER -