D. Williams, V. Hodge, L. Gega, D. Murphy, P. Cowling, and A. Drachen, "AI and Automatic Music Generation for Mindfulness," Paper 84, (March 2019).
Abstract: This paper presents an architecture for the creation of emotionally congruent music using machine-learning-aided sound synthesis. Our system can generate a small corpus of music using Hidden Markov Models; we can label the pieces with emotional tags using data elicited from questionnaires. This produces a corpus of labelled music underpinned by perceptual evaluations. We then analyse participants' galvanic skin response (GSR) while they listen to our generated music pieces, and the emotions they describe in a questionnaire conducted after listening. These analyses reveal a direct correlation between the calmness/scariness of a musical piece, the users' GSR readings, and the emotions the users describe feeling. From these, we will be able to estimate an emotional state using biofeedback as a control signal for a machine-learning algorithm, which generates new musical structures according to a perceptually informed musical feature similarity model. Our case study suggests various applications, including gaming, automated soundtrack generation, and mindfulness.
@article{williams2019ai,
  author={Williams, Duncan and Hodge, Victoria and Gega, Lina and Murphy, Damian and Cowling, Peter and Drachen, Anders},
  journal={Journal of the Audio Engineering Society},
  title={AI and Automatic Music Generation for Mindfulness},
  year={2019},
  volume={},
  number={},
  pages={},
  doi={},
  month={March},
  abstract={This paper presents an architecture for the creation of emotionally congruent music using machine-learning-aided sound synthesis. Our system can generate a small corpus of music using Hidden Markov Models; we can label the pieces with emotional tags using data elicited from questionnaires. This produces a corpus of labelled music underpinned by perceptual evaluations. We then analyse participants' galvanic skin response (GSR) while they listen to our generated music pieces, and the emotions they describe in a questionnaire conducted after listening. These analyses reveal a direct correlation between the calmness/scariness of a musical piece, the users' GSR readings, and the emotions the users describe feeling. From these, we will be able to estimate an emotional state using biofeedback as a control signal for a machine-learning algorithm, which generates new musical structures according to a perceptually informed musical feature similarity model. Our case study suggests various applications, including gaming, automated soundtrack generation, and mindfulness.},
}
TY - CPAPER
TI - AI and Automatic Music Generation for Mindfulness
SP -
EP -
AU - Williams, Duncan
AU - Hodge, Victoria
AU - Gega, Lina
AU - Murphy, Damian
AU - Cowling, Peter
AU - Drachen, Anders
PY - 2019
JO - Journal of the Audio Engineering Society
IS -
VL -
Y1 - March 2019
AB - This paper presents an architecture for the creation of emotionally congruent music using machine-learning-aided sound synthesis. Our system can generate a small corpus of music using Hidden Markov Models; we can label the pieces with emotional tags using data elicited from questionnaires. This produces a corpus of labelled music underpinned by perceptual evaluations. We then analyse participants' galvanic skin response (GSR) while they listen to our generated music pieces, and the emotions they describe in a questionnaire conducted after listening. These analyses reveal a direct correlation between the calmness/scariness of a musical piece, the users' GSR readings, and the emotions the users describe feeling. From these, we will be able to estimate an emotional state using biofeedback as a control signal for a machine-learning algorithm, which generates new musical structures according to a perceptually informed musical feature similarity model. Our case study suggests various applications, including gaming, automated soundtrack generation, and mindfulness.
ER -
Authors:
Williams, Duncan; Hodge, Victoria; Gega, Lina; Murphy, Damian; Cowling, Peter; Drachen, Anders
Affiliation:
University of York, York, UK
AES Conference:
2019 AES International Conference on Immersive and Interactive Audio (March 2019)
Paper Number:
84
Publication Date:
March 17, 2019
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=20439