TY - JOUR
T1 - Perception, performance, and detectability of conversational artificial intelligence across 32 university courses
AU - Ibrahim, Hazem
AU - Liu, Fengyuan
AU - Asim, Rohail
AU - Battu, Balaraju
AU - Benabderrahmane, Sidahmed
AU - Alhafni, Bashar
AU - Adnan, Wifag
AU - Alhanai, Tuka
AU - AlShebli, Bedoor
AU - Baghdadi, Riyadh
AU - Bélanger, Jocelyn J.
AU - Beretta, Elena
AU - Celik, Kemal
AU - Chaqfeh, Moumena
AU - Daqaq, Mohammed F.
AU - Bernoussi, Zaynab El
AU - Fougnie, Daryl
AU - Garcia de Soto, Borja
AU - Gandolfi, Alberto
AU - Gyorgy, Andras
AU - Habash, Nizar
AU - Harris, J. Andrew
AU - Kaufman, Aaron
AU - Kirousis, Lefteris
AU - Kocak, Korhan
AU - Lee, Kangsan
AU - Lee, Seungah S.
AU - Malik, Samreen
AU - Maniatakos, Michail
AU - Melcher, David
AU - Mourad, Azzam
AU - Park, Minsu
AU - Rasras, Mahmoud
AU - Reuben, Alicja
AU - Zantout, Dania
AU - Gleason, Nancy W.
AU - Makovi, Kinga
AU - Rahwan, Talal
AU - Zaki, Yasir
N1 - Publisher Copyright:
© 2023, The Author(s).
PY - 2023/12
Y1 - 2023/12
N2 - The emergence of large language models has led to the development of powerful tools such as ChatGPT that can produce text indistinguishable from human-generated work. With the increasing accessibility of such technology, students across the globe may utilize it to help with their school work—a possibility that has sparked ample discussion on the integrity of student evaluation processes in the age of artificial intelligence (AI). To date, it is unclear how such tools perform compared to students on university-level courses across various disciplines. Further, students’ perspectives regarding the use of such tools in school work, and educators’ perspectives on treating their use as plagiarism, remain unknown. Here, we compare the performance of the state-of-the-art tool, ChatGPT, against that of students on 32 university-level courses. We also assess the degree to which its use can be detected by two classifiers designed specifically for this purpose. Additionally, we conduct a global survey across five countries, as well as a more in-depth survey at the authors’ institution, to discern students’ and educators’ perceptions of ChatGPT’s use in school work. We find that ChatGPT’s performance is comparable, if not superior, to that of students in a multitude of courses. Moreover, current AI-text classifiers cannot reliably detect ChatGPT’s use in school work, due to both their propensity to classify human-written answers as AI-generated, as well as the relative ease with which AI-generated text can be edited to evade detection. Finally, there seems to be an emerging consensus among students to use the tool, and among educators to treat its use as plagiarism. Our findings offer insights that could guide policy discussions addressing the integration of artificial intelligence into educational frameworks.
UR - https://www.mendeley.com/catalogue/5b2fc6a6-ab2b-3046-854e-c6e470a6fd30/
U2 - 10.1038/s41598-023-38964-3
DO - 10.1038/s41598-023-38964-3
M3 - Article
C2 - 37620342
SN - 2045-2322
VL - 13
JO - Scientific Reports
JF - Scientific Reports
IS - 1
M1 - 12187
ER -