Publication: Modeling Musical Mood from Audio Features and Listening Context on an In-Situ Dataset

Real-life listening experiences contain a wide range of music types and genres. We create the first model of musical mood using a data set gathered in-situ during a user's daily life. We show that while audio features, song lyrics and socially created tags can be used to successfully model musical mood with classification accuracies greater than chance, adding contextual information such as the listener's affective state or listening context can improve classification accuracy. We successfully classify musical arousal with a classification accuracy of 67% and musical valence with an accuracy of 75% when using both musical features and listening context.
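The idea of augmenting audio features with listening-context features can be illustrated with a small sketch. Everything below is a hypothetical toy example, not the paper's actual pipeline: the feature names, the data values, and the nearest-centroid classifier are all illustrative assumptions, chosen only to show how concatenating context features alongside audio features yields a second, richer feature space for the same valence labels.

```python
# Hypothetical sketch: classifying musical valence (positive/negative) from
# audio features, optionally augmented with listening-context features.
# All data and feature names here are invented for illustration.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def nearest_centroid_fit(X, y):
    """Compute one centroid per class label."""
    classes = sorted(set(y))
    return {c: centroid([x for x, label in zip(X, y) if label == c])
            for c in classes}

def nearest_centroid_predict(model, x):
    """Assign x to the class whose centroid is closest (squared Euclidean)."""
    return min(model, key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(x, model[c])))

# Toy data: audio features are [tempo, energy]; context features are
# [listening_alone, activity_level] (both made up for this sketch).
audio   = [[120, 0.8], [60, 0.2], [130, 0.9], [55, 0.3]]
context = [[1, 0.9], [0, 0.1], [1, 0.8], [0, 0.2]]
valence = ["positive", "negative", "positive", "negative"]

# Audio-only model vs. audio+context model (features simply concatenated).
model_audio = nearest_centroid_fit(audio, valence)
model_full  = nearest_centroid_fit([a + c for a, c in zip(audio, context)],
                                   valence)

print(nearest_centroid_predict(model_audio, [125, 0.85]))        # positive
print(nearest_centroid_predict(model_full, [125, 0.85, 1, 0.9]))  # positive
```

In the paper's setting the second model would be the one reaching the reported 75% valence accuracy; the sketch only demonstrates the feature-concatenation mechanism, with any real system using far richer features and a stronger classifier.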

Participants

Diane Watson
University of Waterloo
Regan Mandryk
University of Saskatchewan

Citation

Watson, D., Mandryk, R.L. 2012. Modeling Musical Mood from Audio Features and Listening Context on an In-Situ Dataset. In ISMIR 2012, Porto, Portugal. 31-36.

BibTeX

@inproceedings{266-ISMIR2012cameraready1.1,
  author    = {Diane Watson and Regan Mandryk},
  title     = {Modeling Musical Mood from Audio Features and Listening Context on an In-Situ Dataset},
  booktitle = {ISMIR 2012},
  year      = {2012},
  address   = {Porto, Portugal},
  pages     = {31--36}
}