Personalized Productive Engagement Recognition in Robot-Mediated Collaborative Learning

Vetha Vikashini Chithrra Raghuram, Hanan Salam, Jauwairia Nasir, Barbara Bruno, Oya Celiktutan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we propose and compare personalized models for Productive Engagement (PE) recognition. PE is defined as the level of engagement that maximizes learning. Previously, in the context of robot-mediated collaborative learning, a framework of productive engagement was developed using multimodal data from 32 dyads, and learning profiles, namely Expressive Explorers (EE), Calm Tinkerers (CT), and Silent Wanderers (SW), were identified that categorize learners according to their learning gain. Within the same framework, a PE score was constructed in an unsupervised manner for real-time evaluation. Here, we use these profiles and the PE score within an AutoML deep learning framework to personalize PE models. We investigate two approaches for this purpose: (1) Single-task Deep Neural Architecture Search (ST-NAS), and (2) Multitask NAS (MT-NAS). In the former approach, personalized models for each learner profile are learned from multimodal features and compared to non-personalized models. In the MT-NAS approach, we investigate whether jointly classifying the learners' profiles with the engagement score through multi-task learning serves as an implicit personalization of PE. Moreover, we compare the predictive power of two types of features: incremental and non-incremental. Non-incremental features are computed from the participant's behaviour in fixed time windows, whereas incremental features account for the behaviour from the beginning of the learning activity up to the time window in which productive engagement is observed. Our experimental results show that (1) personalized models improve recognition performance over non-personalized models when training models for the gainer vs. non-gainer groups, (2) multitask NAS (implicit personalization) also outperforms non-personalized models, (3) the speech modality contributes strongly to the prediction, and (4) non-incremental features outperform incremental ones overall.
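To make the multi-task (MT-NAS) idea concrete, the sketch below shows a two-headed network that jointly classifies the learner profile (EE/CT/SW) and regresses the PE score from a windowed multimodal feature vector. This is not the authors' NAS-discovered architecture: the paper searches the architecture with AutoML, whereas this fixed PyTorch encoder, the layer sizes, and the loss weight alpha are purely illustrative assumptions.

```python
# Minimal sketch (illustrative only) of multi-task Productive Engagement
# recognition: a shared encoder with a profile-classification head and a
# PE-score regression head. All names and sizes are hypothetical.
import torch
import torch.nn as nn


class MultiTaskPEModel(nn.Module):
    def __init__(self, n_features: int, n_profiles: int = 3, hidden: int = 64):
        super().__init__()
        # Shared trunk over the windowed multimodal feature vector.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
        )
        # Task 1: learner-profile classification (implicit personalization).
        self.profile_head = nn.Linear(hidden, n_profiles)
        # Task 2: productive-engagement score regression.
        self.pe_head = nn.Linear(hidden, 1)

    def forward(self, x):
        z = self.encoder(x)
        return self.profile_head(z), self.pe_head(z).squeeze(-1)


def multitask_loss(profile_logits, pe_pred, profile_true, pe_true, alpha=0.5):
    """Weighted sum of the two task losses; alpha is an assumed trade-off weight."""
    clf = nn.functional.cross_entropy(profile_logits, profile_true)
    reg = nn.functional.mse_loss(pe_pred, pe_true)
    return alpha * clf + (1.0 - alpha) * reg


if __name__ == "__main__":
    model = MultiTaskPEModel(n_features=48)          # 48 is an arbitrary example size
    x = torch.randn(8, 48)                           # batch of 8 feature windows
    logits, pe = model(x)
    loss = multitask_loss(logits, pe,
                          torch.randint(0, 3, (8,)), # dummy profile labels
                          torch.rand(8))             # dummy PE scores
    loss.backward()
    print(loss.item())
```

In the single-task (ST-NAS) setting, by contrast, one would train a separate PE-score model per learner profile (or per gainer/non-gainer group) and drop the classification head.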

Original language: English (US)
Title of host publication: ICMI 2022 - Proceedings of the 2022 International Conference on Multimodal Interaction
Publisher: Association for Computing Machinery
Pages: 632-641
Number of pages: 10
ISBN (Electronic): 9781450393904
DOIs
State: Published - Nov 7 2022
Event: 24th ACM International Conference on Multimodal Interaction, ICMI 2022 - Bangalore, India
Duration: Nov 7 2022 – Nov 11 2022

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 24th ACM International Conference on Multimodal Interaction, ICMI 2022
Country/Territory: India
City: Bangalore
Period: 11/7/22 – 11/11/22

Keywords

  • Embodied Interaction
  • Engagement Prediction
  • Human-robot/Agent Interaction
  • Personalization
  • Personalized Affective Computing
  • Social Robotics in Education
  • Social Signals

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Networks and Communications
  • Computer Vision and Pattern Recognition
  • Software
