Smartphones, voice assistants, and home robots are becoming more intelligent every day to support humans in their daily routines and tasks. For such technologies to achieve user acceptance and success, they must be socially informed, responsive, and responsible. They need to understand human behaviour and socio-emotional states and adapt to their users' profiles (e.g., personality) and preferences. Motivated by this, there has been significant effort in recognising personality from multimodal data over the last decade. However, to the best of our knowledge, methods so far have focused exclusively on one-fits-all approaches and performed personality recognition without taking users' profiles (e.g., gender and age) into consideration. In this paper, we took a different approach: we argued that a one-fits-all approach does not work sufficiently well for personality recognition, as previous research has shown significant gender differences in personality traits. For example, women tend to report higher scores for extraversion, agreeableness, and neuroticism than men. Building upon these findings, we first clustered the participants into two profiles based on their gender, namely female and male, and then used Neural Architecture Search (NAS) to automatically design a model for each profile to recognise personality. A separate network was designed and trained for each of the visual and text features, and the final prediction was obtained by aggregating the results of the video and text modalities. Figure 1 presents an overview of our proposed approach.
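The pipeline described above (profile-based routing to a per-gender model pair, then late fusion of visual and text predictions) could be sketched as follows. This is a minimal illustration with assumed feature dimensions and stub linear models; the paper's actual architectures are found via NAS, and the exact aggregation rule is an assumption here.

```python
import numpy as np

# Hypothetical stand-ins for the NAS-designed networks: one
# (visual, text) model pair per gender profile, stubbed as fixed
# linear projections onto the five Big-Five trait scores.
rng = np.random.default_rng(0)
PROFILES = ("female", "male")
models = {
    p: {
        "visual": rng.standard_normal((512, 5)) * 0.01,  # assumed 512-d visual features
        "text": rng.standard_normal((300, 5)) * 0.01,    # assumed 300-d text features
    }
    for p in PROFILES
}

def predict_traits(profile, visual_feat, text_feat):
    """Route the sample to its profile's model pair and aggregate the
    per-modality predictions (late fusion; averaging is an assumption)."""
    m = models[profile]
    vis_pred = visual_feat @ m["visual"]
    txt_pred = text_feat @ m["text"]
    return (vis_pred + txt_pred) / 2.0

scores = predict_traits("female",
                        rng.standard_normal(512),
                        rng.standard_normal(300))
print(scores.shape)  # one score per personality trait
```

The key design point is that the profile decides *which* models run: each gender profile gets its own architecture, rather than a single shared network for all users.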
Original language: English (US)
Title of host publication: ICCV 2021 Understanding Social Behavior in Dyadic and Small Group Interactions Challenge
State: Published - 2021