TY - GEN
T1 - Evolving in-game mood-expressive music with MetaCompose
AU - Scirea, Marco
AU - Togelius, Julian
AU - Eklund, Peter
AU - Risi, Sebastian
N1 - Publisher Copyright:
© 2018 Association for Computing Machinery.
PY - 2018/9/12
Y1 - 2018/9/12
AB - MetaCompose is a music generator based on a hybrid evolutionary technique that combines FI-2POP and multi-objective optimization. In this paper we employ the MetaCompose music generator to create music in real time that expresses different mood-states in a game-playing environment (Checkers). In particular, this paper focuses on determining whether differences in player experience can be observed when: (i) affective-dynamic music is used instead of static music, and (ii) the music supports the game's internal narrative/state. Participants were asked to play two games of Checkers while listening to two (out of three) different set-ups of game-related generated music. The possible set-ups were: static expression, consistent affective expression, and random affective expression. During game-play, players wore an E4 wristband, which allowed various physiological measures to be recorded, such as blood volume pulse (BVP) and electrodermal activity (EDA). On three out of four criteria (engagement, music quality, coherency with game excitement, and coherency with performance), the collected data supports the hypothesis that players prefer dynamic affective music when asked to reflect on the current game-state. In the future this system could allow designers and composers to easily create affective, dynamic soundtracks for interactive applications.
KW - Affective expression
KW - Evolutionary algorithms
KW - Music generation
UR - http://www.scopus.com/inward/record.url?scp=85060905429&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85060905429&partnerID=8YFLogxK
U2 - 10.1145/3243274.3243292
DO - 10.1145/3243274.3243292
M3 - Conference contribution
AN - SCOPUS:85060905429
T3 - ACM International Conference Proceeding Series
BT - Audio Mostly - A Conference on Interaction with Sound - 2018 Sound in Immersion and Emotion, AM 2018 - Conference Proceedings
PB - Association for Computing Machinery
T2 - 2018 International Audio Mostly Conference - A Conference on Interaction with Sound: Sound in Immersion and Emotion, AM 2018
Y2 - 12 September 2018 through 14 September 2018
ER -