TY - GEN
T1 - RETRIEVING MUSICAL INFORMATION FROM NEURAL DATA
T2 - 23rd International Society for Music Information Retrieval Conference, ISMIR 2022
AU - Abrams, Ellie Bean
AU - Vidal, Eva Muñoz
AU - Pelofi, Claire
AU - Ripollés, Pablo
N1 - Publisher Copyright:
© E.B. Abrams, E.M. Vidal, C. Pelofi, and P. Ripollés.
PY - 2022
Y1 - 2022
N2 - Various features, from low-level acoustics to higher-level statistical regularities to memory associations, contribute to the experience of musical enjoyment and pleasure. Recent work suggests that musical surprisal, that is, the unexpectedness of a musical event given its context, may directly predict listeners’ experiences of pleasure and enjoyment during music listening. Understanding how surprisal shapes listeners’ preferences for certain musical pieces has implications for music recommender systems, which are typically based on content (acoustic or semantic) or metadata. Here we test a recently developed computational algorithm, the Dynamic-Regularity Extraction (D-REX) model, which uses Bayesian inference to predict the surprisal that humans experience while listening to music. We demonstrate that the brain tracks musical surprisal as modeled by D-REX by conducting a decoding analysis on the neural signal (collected through magnetoencephalography) of participants listening to music. We thus demonstrate the validity of a computational model of musical surprisal, which may inform the next generation of recommender systems. In addition, we present an open-source neural dataset that will be available for future research, fostering approaches that combine MIR with cognitive neuroscience, which we believe will be a key strategy for characterizing people’s reactions to music.
AB - Various features, from low-level acoustics to higher-level statistical regularities to memory associations, contribute to the experience of musical enjoyment and pleasure. Recent work suggests that musical surprisal, that is, the unexpectedness of a musical event given its context, may directly predict listeners’ experiences of pleasure and enjoyment during music listening. Understanding how surprisal shapes listeners’ preferences for certain musical pieces has implications for music recommender systems, which are typically based on content (acoustic or semantic) or metadata. Here we test a recently developed computational algorithm, the Dynamic-Regularity Extraction (D-REX) model, which uses Bayesian inference to predict the surprisal that humans experience while listening to music. We demonstrate that the brain tracks musical surprisal as modeled by D-REX by conducting a decoding analysis on the neural signal (collected through magnetoencephalography) of participants listening to music. We thus demonstrate the validity of a computational model of musical surprisal, which may inform the next generation of recommender systems. In addition, we present an open-source neural dataset that will be available for future research, fostering approaches that combine MIR with cognitive neuroscience, which we believe will be a key strategy for characterizing people’s reactions to music.
UR - http://www.scopus.com/inward/record.url?scp=85207833385&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85207833385&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85207833385
T3 - Proceedings of the 23rd International Society for Music Information Retrieval Conference, ISMIR 2022
SP - 160
EP - 168
BT - Proceedings of the 23rd International Society for Music Information Retrieval Conference, ISMIR 2022
A2 - Rao, Preeti
A2 - Murthy, Hema
A2 - Srinivasamurthy, Ajay
A2 - Bittner, Rachel
A2 - Repetto, Rafael Caro
A2 - Goto, Masataka
A2 - Serra, Xavier
A2 - Miron, Marius
PB - International Society for Music Information Retrieval
Y2 - 4 December 2022 through 8 December 2022
ER -