Abstract
In this paper we present a character-level sequence-to-sequence lemmatization model that utilizes several subword features in multiple configurations. In addition to generic n-gram embeddings (using FastText), we experiment with concatenative (stems) and templatic (roots and patterns) morphological subwords. We present several architectures that embed these features directly at the encoder side, or learn them jointly at the decoder side with a multitask learning architecture. The results indicate that the generic n-gram embeddings (through FastText) outperform the other, linguistically driven subwords. We use Modern Standard Arabic and Egyptian Arabic as test cases, achieving up to 22% and 13% relative error reduction, respectively, over a strong baseline. An error analysis shows that our best system can even handle word/lemma pairs where both the word and the lemma are unseen in the training data.
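To make the encoder-side configuration concrete, here is a minimal sketch (not the authors' released code) of a character-level seq2seq lemmatizer whose encoder input concatenates each character embedding with a word-level subword feature vector, such as a pretrained FastText embedding of the input word. All module names, dimensions, and the teacher-forced interface below are illustrative assumptions.

```python
# Hypothetical sketch of an encoder-side subword-feature lemmatizer;
# dimensions and names are illustrative, not the paper's actual setup.
import torch
import torch.nn as nn

class CharSeq2SeqLemmatizer(nn.Module):
    def __init__(self, n_chars, char_dim=64, feat_dim=300, hid=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        # Encoder reads [character embedding ; word-level feature vector].
        self.encoder = nn.LSTM(char_dim + feat_dim, hid,
                               bidirectional=True, batch_first=True)
        self.decoder = nn.LSTM(char_dim, 2 * hid, batch_first=True)
        self.out = nn.Linear(2 * hid, n_chars)

    def forward(self, word_chars, feat_vec, lemma_chars):
        # word_chars:  (B, T_in)    character ids of the input word
        # feat_vec:    (B, feat_dim) e.g. a FastText vector of the word,
        #              which composes n-gram vectors even for OOV words
        # lemma_chars: (B, T_out)   gold lemma characters (teacher forcing)
        B, T_in = word_chars.shape
        enc_in = torch.cat(
            [self.char_emb(word_chars),
             feat_vec.unsqueeze(1).expand(-1, T_in, -1)], dim=-1)
        _, (h, c) = self.encoder(enc_in)
        # Concatenate the two LSTM directions to seed the decoder state.
        h0 = h.transpose(0, 1).reshape(1, B, -1)
        c0 = c.transpose(0, 1).reshape(1, B, -1)
        dec_out, _ = self.decoder(self.char_emb(lemma_chars), (h0, c0))
        return self.out(dec_out)  # (B, T_out, n_chars) logits

model = CharSeq2SeqLemmatizer(n_chars=100)
logits = model(torch.randint(100, (2, 9)),   # input word characters
               torch.randn(2, 300),          # word-level feature vector
               torch.randint(100, (2, 6)))   # target lemma characters
```

The decoder-side (multitask) alternative mentioned in the abstract would instead add auxiliary output heads that predict the subword units jointly with the lemma characters, sharing the same encoder.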
Field | Value
---|---
Original language | English (US)
Title of host publication | Proceedings of the 28th International Conference on Computational Linguistics
Place of publication | Barcelona, Spain (Online)
Publisher | International Committee on Computational Linguistics
Pages | 4676-4682
Number of pages | 7
DOIs |
State | Published - Dec 1 2020