TY - JOUR
T1 - OpenOPAF: An Open-Source Multimodal System for Automated Feedback for Oral Presentations
T2 - Journal of Learning Analytics
AU - Ochoa, Xavier
AU - Zhao, Heru
N1 - Publisher Copyright:
© 2024, Society for Learning Analytics Research (SOLAR). All rights reserved.
PY - 2024/12/25
Y1 - 2024/12/25
AB - Providing automated feedback that facilitates the practice and acquisition of oral presentation skills has been one of the notable applications of multimodal learning analytics (MmLA). However, the closedness and general unavailability of existing systems have reduced their potential impact and benefits. This work introduces OpenOPAF, an open-source system designed to provide automated multimodal feedback for oral presentations. By leveraging analytics to assess body language, gaze direction, voice volume, articulation speed, filled pauses, and the use of text in visual aids, it provides real-time, actionable information to presenters. Evaluations conducted on OpenOPAF show that it performs similarly, both technically and pedagogically, to existing closed solutions. This system targets practitioners who wish to use it as-is to provide feedback to novice presenters, developers seeking to adapt it for other learning contexts, and researchers interested in experimenting with new feature extraction algorithms and report mechanisms and studying the acquisition of oral presentation skills. This initiative aims to foster a community-driven approach to democratize access to sophisticated analytics tools for oral presentation skill development.
KW - communication skills
KW - multimodal learning analytics
KW - open-source tool
UR - http://www.scopus.com/inward/record.url?scp=85213681879&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85213681879&partnerID=8YFLogxK
U2 - 10.18608/jla.2024.8411
DO - 10.18608/jla.2024.8411
M3 - Article
AN - SCOPUS:85213681879
SN - 1929-7750
VL - 11
SP - 224
EP - 248
JO - Journal of Learning Analytics
JF - Journal of Learning Analytics
IS - 3
ER -