TY - GEN
T1 - Modeling affective responses to music using audio signal analysis and physiology
AU - Trochidis, Konstantinos
AU - Lui, Simon
N1 - Publisher Copyright:
© Springer International Publishing Switzerland 2016.
PY - 2016
Y1 - 2016
N2 - A key issue in designing personalized affective music applications is finding effective ways to direct emotion through music selection with an appropriate combination of acoustic features. The aim of this study is to understand the dynamic relationships between acoustic features, physiology and affective states. To model these relationships, we used a multivariate approach including continuous measures of emotion from behavioral, subjective and physiological responses. Classical music excerpts taken from opera overtures were used as stimuli to induce emotional variations over time between neutral and intense emotional states. Continuous ratings of arousal and valence, along with cardiovascular, respiratory, skin conductance and facial expressive activity, were recorded simultaneously. Results show that parts of the music with higher loudness and pulse clarity induced higher ratings of arousal, sympathetic activation and increased cardiorespiratory synchronization. In contrast, pleasant and calming parts with major mode and prominent key strength induced higher ratings of valence, parasympathetic activation and increased facial activity.
AB - A key issue in designing personalized affective music applications is finding effective ways to direct emotion through music selection with an appropriate combination of acoustic features. The aim of this study is to understand the dynamic relationships between acoustic features, physiology and affective states. To model these relationships, we used a multivariate approach including continuous measures of emotion from behavioral, subjective and physiological responses. Classical music excerpts taken from opera overtures were used as stimuli to induce emotional variations over time between neutral and intense emotional states. Continuous ratings of arousal and valence, along with cardiovascular, respiratory, skin conductance and facial expressive activity, were recorded simultaneously. Results show that parts of the music with higher loudness and pulse clarity induced higher ratings of arousal, sympathetic activation and increased cardiorespiratory synchronization. In contrast, pleasant and calming parts with major mode and prominent key strength induced higher ratings of valence, parasympathetic activation and increased facial activity.
KW - Acoustic features
KW - Affective computing
KW - Emotion recognition
KW - Musical emotion
KW - Physiological responses
UR - http://www.scopus.com/inward/record.url?scp=84990026568&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84990026568&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-46282-0_22
DO - 10.1007/978-3-319-46282-0_22
M3 - Conference contribution
AN - SCOPUS:84990026568
SN - 9783319462813
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 346
EP - 357
BT - Music, Mind, and Embodiment - 11th International Symposium, CMMR 2015, Revised Selected Papers
A2 - Kronland-Martinet, Richard
A2 - Aramaki, Mitsuko
A2 - Ystad, Sølvi
PB - Springer Verlag
T2 - 11th International Symposium on Computer Music Multidisciplinary Research, CMMR 2015
Y2 - 16 June 2015 through 19 June 2015
ER -