From Interactive to Adaptive Mood-Based Music Listening Experiences in Social or Personal Contexts
Citation & Abstract
M. Barthet, G. Fazekas, A. Allik, F. Thalmann, and M. B. Sandler, "From Interactive to Adaptive Mood-Based Music Listening Experiences in Social or Personal Contexts," J. Audio Eng. Soc., vol. 64, no. 9, pp. 673-682, September 2016. doi: https://doi.org/10.17743/jaes.2016.0042
Abstract: Listeners of audio are increasingly shifting to a participatory culture where technology allows them to modify and control the listening experience. This report describes the developments of a mood-driven music player, Moodplay, which incorporates semantic computing technologies for musical mood using social tags and informative and aesthetic browsing visualizations. The prototype runs with a dataset of over 10,000 songs covering various genres, arousal, and valence levels. Changes in the design of the system were made in response to user evaluations from over 120 participants in 15 different sectors of work or education. The proposed client/server architecture integrates modular components powered by semantic web technologies and audio content feature extraction. This enables recorded music content to be controlled in flexible and nonlinear ways. Dynamic music objects can be used to create mashups on the fly of two or more simultaneous songs to allow selection of multiple moods. The authors also consider nonlinear audio techniques that could transform the player into a creative tool, for instance, by reorganizing, compressing, or expanding temporally prerecorded content.
@article{barthet2016from,
  author   = {Barthet, Mathieu and Fazekas, Gy{\"o}rgy and Allik, Alo and Thalmann, Florian and Sandler, Mark B.},
  journal  = {Journal of the Audio Engineering Society},
  title    = {From Interactive to Adaptive Mood-Based Music Listening Experiences in Social or Personal Contexts},
  year     = {2016},
  month    = sep,
  volume   = {64},
  number   = {9},
  pages    = {673--682},
  doi      = {10.17743/jaes.2016.0042},
  abstract = {Listeners of audio are increasingly shifting to a participatory culture where technology allows them to modify and control the listening experience. This report describes the developments of a mood-driven music player, Moodplay, which incorporates semantic computing technologies for musical mood using social tags and informative and aesthetic browsing visualizations. The prototype runs with a dataset of over 10,000 songs covering various genres, arousal, and valence levels. Changes in the design of the system were made in response to user evaluations from over 120 participants in 15 different sectors of work or education. The proposed client/server architecture integrates modular components powered by semantic web technologies and audio content feature extraction. This enables recorded music content to be controlled in flexible and nonlinear ways. Dynamic music objects can be used to create mashups on the fly of two or more simultaneous songs to allow selection of multiple moods. The authors also consider nonlinear audio techniques that could transform the player into a creative tool, for instance, by reorganizing, compressing, or expanding temporally prerecorded content.},
}
TY  - JOUR
TI  - From Interactive to Adaptive Mood-Based Music Listening Experiences in Social or Personal Contexts
AU  - Barthet, Mathieu
AU  - Fazekas, György
AU  - Allik, Alo
AU  - Thalmann, Florian
AU  - Sandler, Mark B.
JO  - Journal of the Audio Engineering Society
VL  - 64
IS  - 9
SP  - 673
EP  - 682
PY  - 2016
Y1  - 2016/09/19
DO  - 10.17743/jaes.2016.0042
AB  - Listeners of audio are increasingly shifting to a participatory culture where technology allows them to modify and control the listening experience. This report describes the developments of a mood-driven music player, Moodplay, which incorporates semantic computing technologies for musical mood using social tags and informative and aesthetic browsing visualizations. The prototype runs with a dataset of over 10,000 songs covering various genres, arousal, and valence levels. Changes in the design of the system were made in response to user evaluations from over 120 participants in 15 different sectors of work or education. The proposed client/server architecture integrates modular components powered by semantic web technologies and audio content feature extraction. This enables recorded music content to be controlled in flexible and nonlinear ways. Dynamic music objects can be used to create mashups on the fly of two or more simultaneous songs to allow selection of multiple moods. The authors also consider nonlinear audio techniques that could transform the player into a creative tool, for instance, by reorganizing, compressing, or expanding temporally prerecorded content.
ER  - 
Open Access
Authors:
Barthet, Mathieu; Fazekas, György; Allik, Alo; Thalmann, Florian; Sandler, Mark B.
Affiliation:
Centre for Digital Music, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
Publication Date:
September 19, 2016
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=18376