TY - CPAPER
T1 - Applying the cognitive machine translation evaluation approach to Arabic
AU - Temnikova, Irina
AU - Zaghouani, Wajdi
AU - Vogel, Stephan
AU - Habash, Nizar
N1 - Funding Information:
We thank the anonymous reviewers for their valuable comments and suggestions. The second and fourth authors, Zaghouani and Habash, as well as the QALB post-editors, were funded by grant NPRP-4-1058-1-168 from the Qatar National Research Fund (a member of the Qatar Foundation).
PY - 2016
Y1 - 2016
N2 - The goal of the cognitive machine translation (MT) evaluation approach is to build classifiers that assign post-editing effort scores to new texts. The approach helps estimate fair compensation for post-editors in the translation industry by evaluating the cognitive difficulty of post-editing MT output. It counts the number of errors, classified into categories according to how much cognitive effort correcting them requires. In this paper, we present the results of applying an existing cognitive evaluation approach to Modern Standard Arabic (MSA). We compare the number and categories of errors in three MSA texts of different MT quality (without any language-specific adaptation), and also compare the MSA texts with texts from three Indo-European languages (Russian, Spanish, and Bulgarian) taken from a previous experiment. The results show how the error distributions change as one moves from MSA texts of worse MT quality to MSA texts of better MT quality, and reveal a similarity across all four languages in how the texts of better MT quality are distinguished.
AB - The goal of the cognitive machine translation (MT) evaluation approach is to build classifiers that assign post-editing effort scores to new texts. The approach helps estimate fair compensation for post-editors in the translation industry by evaluating the cognitive difficulty of post-editing MT output. It counts the number of errors, classified into categories according to how much cognitive effort correcting them requires. In this paper, we present the results of applying an existing cognitive evaluation approach to Modern Standard Arabic (MSA). We compare the number and categories of errors in three MSA texts of different MT quality (without any language-specific adaptation), and also compare the MSA texts with texts from three Indo-European languages (Russian, Spanish, and Bulgarian) taken from a previous experiment. The results show how the error distributions change as one moves from MSA texts of worse MT quality to MSA texts of better MT quality, and reveal a similarity across all four languages in how the texts of better MT quality are distinguished.
KW - Arabic
KW - Machine translation evaluation
KW - Post-editing
UR - http://www.scopus.com/inward/record.url?scp=85037125069&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85037125069&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85037125069
T3 - Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC 2016
SP - 3644
EP - 3651
BT - Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC 2016
A2 - Calzolari, Nicoletta
A2 - Choukri, Khalid
A2 - Mazo, Helene
A2 - Moreno, Asuncion
A2 - Declerck, Thierry
A2 - Goggi, Sara
A2 - Grobelnik, Marko
A2 - Odijk, Jan
A2 - Piperidis, Stelios
A2 - Maegaard, Bente
A2 - Mariani, Joseph
PB - European Language Resources Association (ELRA)
T2 - 10th International Conference on Language Resources and Evaluation, LREC 2016
Y2 - 23 May 2016 through 28 May 2016
ER -