OpenOPAF: An Open-Source Multimodal System for Automated Feedback for Oral Presentations

Xavier Ochoa, Heru Zhao

Research output: Contribution to journal › Article › peer-review

Abstract

Providing automated feedback that facilitates the practice and acquisition of oral presentation skills has been one of the notable applications of multimodal learning analytics (MmLA). However, because existing systems are closed and generally unavailable, their potential impact and benefits have been limited. This work introduces OpenOPAF, an open-source system designed to provide automated multimodal feedback for oral presentations. By leveraging analytics to assess body language, gaze direction, voice volume, articulation speed, filled pauses, and the use of text in visual aids, it provides real-time, actionable information to presenters. Evaluations show that OpenOPAF performs similarly, both technically and pedagogically, to existing closed solutions. The system targets practitioners who wish to use it as-is to give feedback to novice presenters, developers seeking to adapt it to other learning contexts, and researchers interested in experimenting with new feature-extraction algorithms and reporting mechanisms and in studying the acquisition of oral presentation skills. This initiative aims to foster a community-driven approach that democratizes access to sophisticated analytics tools for oral presentation skill development.
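
To make the audio-analytics portion concrete, the sketch below shows how two of the features mentioned above, voice volume and articulation speed, could be derived from a recorded signal. It is a minimal illustration only, not the actual OpenOPAF implementation: the function names, frame sizes, and thresholds are assumptions, and the syllable count is supplied externally rather than detected.

    # A minimal, hypothetical sketch (not the actual OpenOPAF code) of two
    # audio features named in the abstract: voice volume and articulation
    # speed. Assumes 16 kHz mono PCM audio in a NumPy array; frame sizes,
    # thresholds, and function names are illustrative assumptions.
    import numpy as np

    def frame_rms(signal, frame_len=1600):
        """Root-mean-square energy per 100 ms frame: a simple volume proxy."""
        n_frames = len(signal) // frame_len
        frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
        return np.sqrt(np.mean(frames ** 2, axis=1))

    def articulation_rate(n_syllables, rms, voiced_threshold=0.02,
                          frame_seconds=0.1):
        """Syllables per second of voiced time, excluding pauses.

        The syllable count would come from an external detector (e.g. a
        speech recognizer); here it is simply a parameter.
        """
        voiced_seconds = np.sum(rms > voiced_threshold) * frame_seconds
        return n_syllables / voiced_seconds if voiced_seconds > 0 else 0.0

    # Toy usage: 3 s of synthetic "speech" with a silent middle second.
    sr = 16000
    t = np.linspace(0, 3, 3 * sr, endpoint=False)
    audio = 0.1 * np.sin(2 * np.pi * 150 * t)
    audio[sr:2 * sr] = 0.0  # a one-second silent pause

    rms = frame_rms(audio)
    print(f"mean volume (RMS): {rms.mean():.4f}")
    print(f"articulation rate: {articulation_rate(12, rms):.2f} syll/s")

Normalizing by voiced time rather than total time is what distinguishes articulation rate from raw speaking rate; a pause-heavy delivery would otherwise mask fast articulation.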

Original language: English (US)
Pages (from-to): 224-248
Number of pages: 25
Journal: Journal of Learning Analytics
Volume: 11
Issue number: 3
DOIs
State: Published - Dec 25, 2024

Keywords

  • communication skills
  • multimodal learning analytics
  • open-source tool

ASJC Scopus subject areas

  • Education
  • Computer Science Applications
