TY - GEN
T1 - M4GT-Bench
T2 - 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
AU - Wang, Yuxia
AU - Mansurov, Jonibek
AU - Ivanov, Petar
AU - Su, Jinyan
AU - Shelmanov, Artem
AU - Tsvigun, Akim
AU - Afzal, Osama Mohammed
AU - Mahmoud, Tarek
AU - Puccetti, Giovanni
AU - Arnold, Thomas
AU - Aji, Alham Fikri
AU - Habash, Nizar
AU - Gurevych, Iryna
AU - Nakov, Preslav
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
N2 - The advent of Large Language Models (LLMs) has brought an unprecedented surge in machine-generated text (MGT) across diverse channels. This raises legitimate concerns about its potential misuse and societal implications. The need to identify and differentiate such content from genuine human-generated text is critical in combating disinformation, preserving the integrity of education and scientific fields, and maintaining trust in communication. In this work, we address this problem by introducing a new benchmark based on a multilingual, multi-domain, and multi-generator corpus of MGTs: M4GT-Bench. The benchmark comprises three tasks: (1) monolingual and multilingual binary MGT detection; (2) multi-way detection, where one needs to identify which particular model generated the text; and (3) mixed human-machine text detection, where a word boundary delimiting MGT from human-written content should be determined. On the developed benchmark, we tested several MGT detection baselines and also conducted an evaluation of human performance. We find that obtaining good performance in MGT detection usually requires access to training data from the same domain and generators. The benchmark is available at https://github.com/mbzuai-nlp/M4GT-Bench.
AB - The advent of Large Language Models (LLMs) has brought an unprecedented surge in machine-generated text (MGT) across diverse channels. This raises legitimate concerns about its potential misuse and societal implications. The need to identify and differentiate such content from genuine human-generated text is critical in combating disinformation, preserving the integrity of education and scientific fields, and maintaining trust in communication. In this work, we address this problem by introducing a new benchmark based on a multilingual, multi-domain, and multi-generator corpus of MGTs: M4GT-Bench. The benchmark comprises three tasks: (1) monolingual and multilingual binary MGT detection; (2) multi-way detection, where one needs to identify which particular model generated the text; and (3) mixed human-machine text detection, where a word boundary delimiting MGT from human-written content should be determined. On the developed benchmark, we tested several MGT detection baselines and also conducted an evaluation of human performance. We find that obtaining good performance in MGT detection usually requires access to training data from the same domain and generators. The benchmark is available at https://github.com/mbzuai-nlp/M4GT-Bench.
UR - http://www.scopus.com/inward/record.url?scp=85203795171&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85203795171&partnerID=8YFLogxK
U2 - 10.18653/v1/2024.acl-long.218
DO - 10.18653/v1/2024.acl-long.218
M3 - Conference contribution
AN - SCOPUS:85203795171
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 3964
EP - 3992
BT - Long Papers
A2 - Ku, Lun-Wei
A2 - Martins, Andre F. T.
A2 - Srikumar, Vivek
PB - Association for Computational Linguistics (ACL)
Y2 - 11 August 2024 through 16 August 2024
ER -