TY - GEN
T1 - Investigating Lexical Replacements for Arabic-English Code-Switched Data Augmentation
AU - Hamed, Injy
AU - Habash, Nizar
AU - Abdennadher, Slim
AU - Vu, Ngoc Thang
N1 - Publisher Copyright:
© 2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
N2 - Data sparsity is a major problem hindering the development of code-switching (CS) NLP systems. In this paper, we investigate data augmentation techniques for synthesizing dialectal Arabic-English CS text. We perform lexical replacements using word-aligned parallel corpora where CS points are either randomly chosen or learnt using a sequence-to-sequence model. We compare these approaches against dictionary-based replacements. We assess the quality of the generated sentences through human evaluation and evaluate the effectiveness of data augmentation on machine translation (MT), automatic speech recognition (ASR), and speech translation (ST) tasks. Results show that using a predictive model results in more natural CS sentences compared to the random approach, as reported in human judgments. In the downstream tasks, despite the random approach generating more data, both approaches perform equally well (both outperforming dictionary-based replacements). Overall, data augmentation achieves a 34% improvement in perplexity, a 5.2% relative improvement in WER on the ASR task, +4.0-5.1 BLEU points on the MT task, and +2.1-2.2 BLEU points on the ST task over a baseline trained on the available data without augmentation.
UR - http://www.scopus.com/inward/record.url?scp=85174860608&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85174860608&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85174860608
T3 - 6th Workshop on Technologies for Machine Translation of Low-Resource Languages, LoResMT 2023 - Proceedings
SP - 86
EP - 100
BT - 6th Workshop on Technologies for Machine Translation of Low-Resource Languages, LoResMT 2023 - Proceedings
PB - Association for Computational Linguistics
T2 - 6th Workshop on Technologies for Machine Translation of Low-Resource Languages, LoResMT 2023
Y2 - 6 May 2023
ER -