Presentation skills estimation based on video and Kinect data analysis

Vanessa Echeverría, Allan Avendaño, Katherine Chiluiza, Aníbal Vásquez, Xavier Ochoa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper identifies, by means of video and Kinect data, a set of predictors that estimate the presentation skills of 448 individual students. Two evaluation criteria were predicted: eye contact, and posture and body language. Machine-learning evaluations resulted in models that predicted the performance level (good or poor) of the presenters with 68% and 63% correctly classified instances for the eye contact and the posture and body language criteria, respectively. Furthermore, the results suggest that certain features, such as arm movement and movement smoothness, are highly significant for predicting the level of development of presentation skills. The paper finishes with conclusions and related ideas for future work.
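As a rough illustration of the kind of binary classification the abstract describes (predicting a good vs. poor performance level from Kinect-derived features such as arm movement and movement smoothness), the following minimal Python sketch runs a cross-validated classifier. The abstract does not name the specific classifier, feature set, or validation scheme; the random forest, the feature names, and the synthetic data below are assumptions for illustration only, not the authors' pipeline.

    # Hypothetical sketch: good/poor presenter classification from
    # Kinect-style features. Synthetic data stands in for the real corpus.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Stand-in per-presentation features, e.g.
    # [mean arm movement, movement smoothness, torso sway, gaze proxy].
    X = rng.normal(size=(448, 4))
    # Synthetic binary labels: 1 = "good", 0 = "poor" performance level.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=448) > 0).astype(int)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold accuracy
    print(f"Mean cross-validated accuracy: {scores.mean():.2f}")

With real features, the reported percentage of correctly classified instances would correspond to the mean cross-validated accuracy computed here.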

Original language: English (US)
Title of host publication: MLA 2014 - Proceedings of the 2014 ACM Multimodal Learning Analytics Workshop and Grand Challenge, Co-located with ICMI 2014
Publisher: Association for Computing Machinery
Pages: 53-60
Number of pages: 8
ISBN (Electronic): 9781450304887
DOIs
State: Published - Nov 12 2014
Event: 3rd Multimodal Learning Analytics Workshop and Grand Challenges, MLA 2014 - Istanbul, Turkey
Duration: Nov 12 2014 - Nov 12 2014

Publication series

Name: MLA 2014 - Proceedings of the 2014 ACM Multimodal Learning Analytics Workshop and Grand Challenge, Co-located with ICMI 2014

Other

Other: 3rd Multimodal Learning Analytics Workshop and Grand Challenges, MLA 2014
Country/Territory: Turkey
City: Istanbul
Period: 11/12/14 - 11/12/14

Keywords

  • Multimodal
  • Presentation skills
  • Video features

ASJC Scopus subject areas

  • Computer Science Applications
  • Education
