User-independent Accelerometer Gesture Recognition for Participatory Mobile Music
G. Roma, A. Xambó, and J. Freeman, "User-independent Accelerometer Gesture Recognition for Participatory Mobile Music," J. Audio Eng. Soc., vol. 66, no. 6, pp. 430-438 (2018 June). doi: https://doi.org/10.17743/jaes.2018.0026
Abstract: With the widespread use of smartphones that have multiple sensors and sound processing capabilities, there is a great potential for increased audience participation in music performances. This paper proposes a framework for participatory mobile music based on mapping arbitrary accelerometer gestures to sound synthesizers. The authors describe Handwaving, a system based on neural networks for real-time gesture recognition and sonification on mobile browsers. Based on a multiuser dataset, results show that training with data from multiple users improves classification accuracy, supporting the use of the proposed algorithm for user-independent gesture recognition. This illustrates the relevance of user-independent training for multiuser settings, especially in participatory music. The system is implemented using web standards, which makes it simple and quick to deploy software on audience devices in live performance settings.
@article{roma2018user-independent,
  author = {Roma, Gerard and Xambó, Anna and Freeman, Jason},
  journal = {Journal of the Audio Engineering Society},
  title = {User-independent Accelerometer Gesture Recognition for Participatory Mobile Music},
  year = {2018},
  volume = {66},
  number = {6},
  pages = {430--438},
  doi = {10.17743/jaes.2018.0026},
  month = {June},
  abstract = {With the widespread use of smartphones that have multiple sensors and sound processing capabilities, there is a great potential for increased audience participation in music performances. This paper proposes a framework for participatory mobile music based on mapping arbitrary accelerometer gestures to sound synthesizers. The authors describe Handwaving, a system based on neural networks for real-time gesture recognition and sonification on mobile browsers. Based on a multiuser dataset, results show that training with data from multiple users improves classification accuracy, supporting the use of the proposed algorithm for user-independent gesture recognition. This illustrates the relevance of user-independent training for multiuser settings, especially in participatory music. The system is implemented using web standards, which makes it simple and quick to deploy software on audience devices in live performance settings.},
}
TY  - JOUR
TI  - User-independent Accelerometer Gesture Recognition for Participatory Mobile Music
AU  - Roma, Gerard
AU  - Xambó, Anna
AU  - Freeman, Jason
JO  - Journal of the Audio Engineering Society
VL  - 66
IS  - 6
SP  - 430
EP  - 438
PY  - 2018
Y1  - 2018/06/
DO  - 10.17743/jaes.2018.0026
AB  - With the widespread use of smartphones that have multiple sensors and sound processing capabilities, there is a great potential for increased audience participation in music performances. This paper proposes a framework for participatory mobile music based on mapping arbitrary accelerometer gestures to sound synthesizers. The authors describe Handwaving, a system based on neural networks for real-time gesture recognition and sonification on mobile browsers. Based on a multiuser dataset, results show that training with data from multiple users improves classification accuracy, supporting the use of the proposed algorithm for user-independent gesture recognition. This illustrates the relevance of user-independent training for multiuser settings, especially in participatory music. The system is implemented using web standards, which makes it simple and quick to deploy software on audience devices in live performance settings.
ER  - 
Authors:
Roma, Gerard; Xambó, Anna; Freeman, Jason
Affiliations:
University of Huddersfield, Huddersfield, UK; Queen Mary University of London, London, UK; Georgia Institute of Technology, Atlanta, GA, USA
JAES Volume 66, Issue 6, pp. 430-438; June 2018
Publication Date:
June 18, 2018
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=19582