We present de-lexical segmentation, a linguistically motivated alternative to greedy or other unsupervised methods that requires only minimal language-specific input. Our technique involves creating a small grammar of closed-class affixes, which can be written in a few hours. The grammar over-generates analyses for word forms attested in a raw corpus; these analyses are then disambiguated based on features of the linguistic base proposed for each form. Extending the grammar to cover orthographic, morpho-syntactic, or lexical variation is simple, making it an ideal solution for challenging corpora with noisy, dialect-inconsistent, or otherwise non-standard content. In two evaluations, we consistently outperform competitive unsupervised baselines and approach the performance of state-of-the-art supervised models trained on large amounts of data, providing evidence for the value of linguistic input during preprocessing.
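The core idea in the abstract, a small affix grammar that over-generates candidate splits, with disambiguation driven by evidence for the proposed base, can be illustrated with a toy sketch. Everything below is a hypothetical simplification for intuition only: the suffix list, the paradigm-support scoring, and the tiny vocabulary are all assumptions, not the authors' actual grammar or disambiguation features.

```python
# Illustrative sketch only: a toy closed-class suffix "grammar" (hypothetical,
# English-like) over-generates (base, suffix) analyses, and we disambiguate by
# how well the proposed base is supported by other attested forms.
SUFFIXES = ["", "s", "ed", "ing"]

def candidate_analyses(word):
    """Over-generate every (base, suffix) split the affix grammar licenses."""
    return [(word[: len(word) - len(s)], s)
            for s in SUFFIXES
            if word.endswith(s) and len(word) > len(s)]

def base_evidence(base, vocab):
    """Toy stand-in for base features: count attested forms of this base."""
    return sum(1 for s in SUFFIXES if base + s in vocab)

def segment(word, vocab):
    """Pick the analysis whose proposed base is best attested in the corpus."""
    return max(candidate_analyses(word),
               key=lambda bs: base_evidence(bs[0], vocab))

# Tiny "raw corpus" vocabulary (assumed data, for demonstration).
vocab = set("walk walks walked walking talk talked".split())

print(segment("walked", vocab))   # base "walk" has richer paradigm support
print(segment("talking", vocab))
```

In this sketch, "walked" is analyzed as ("walk", "ed") because four forms of the base "walk" are attested, versus only one for the whole-word analysis; the real system's disambiguation features are, of course, richer than this frequency heuristic.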
Title of host publication: Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology
Place of publication: Florence, Italy
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 12
State: Published - Aug 1, 2019
Erdmann, A., Khalifa, S., Oudah, M., Habash, N., & Bouamor, H. (2019). A Little Linguistics Goes a Long Way: Unsupervised Segmentation with Limited Language Specific Guidance. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology (pp. 113-124). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/W19-4214