Automatic Vocal Percussion Transcription Aimed at Mobile Music Production
Citation & Abstract
H. A. Sánchez-Hevia, C. Llerena-Aguilar, G. Ramos-Auñón, and R. Gil-Pita, "Automatic Vocal Percussion Transcription Aimed at Mobile Music Production," Paper 9352, (2015 May).
Abstract: In this paper we present an automatic vocal percussion transcription system intended as an alternative to touchscreen input for drum and percussion programming. The objective of the system is to simplify the user's workflow by letting them create percussive tracks made up of different samples triggered by their own voice, without requiring any demanding skill, by tailoring the system to their specific needs. The system consists of three stages: event detection, feature extraction, and classification. We employ small user-generated databases to adapt to particular vocalizations while avoiding overfitting and keeping computational complexity as low as possible.
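The pipeline named in the abstract (event detection, feature extraction, classification against a small user-recorded template set) can be sketched in a minimal, illustrative form. This is not the authors' implementation: the short-time energy threshold, the energy/zero-crossing-rate features, and the nearest-centroid classifier are placeholder choices standing in for whatever the paper actually uses.

```python
import math

def frame_energy(x, frame_len=256, hop=128):
    """Short-time energy per frame of a mono signal (list of floats)."""
    return [sum(s * s for s in x[i:i + frame_len])
            for i in range(0, len(x) - frame_len + 1, hop)]

def detect_events(x, frame_len=256, hop=128, thresh=0.5):
    """Event detection: frame indices where energy first rises above
    thresh * peak energy (a crude onset detector, for illustration only)."""
    e = frame_energy(x, frame_len, hop)
    peak = max(e) or 1.0
    events, above = [], False
    for i, v in enumerate(e):
        if v > thresh * peak and not above:
            events.append(i)
            above = True
        elif v <= thresh * peak:
            above = False
    return events

def features(seg):
    """Two toy features: mean energy and zero-crossing rate."""
    energy = sum(s * s for s in seg) / len(seg)
    zcr = sum(1 for a, b in zip(seg, seg[1:]) if a * b < 0) / len(seg)
    return (energy, zcr)

def classify(feat, templates):
    """Nearest-centroid match against a small user-generated template set,
    mapping a vocalization to a drum-sample label (e.g. 'kick', 'hat')."""
    return min(templates, key=lambda lbl: math.dist(feat, templates[lbl]))
```

A user would record one example per drum sound, store its feature vector in `templates`, and each detected vocal event would then trigger the sample whose template is nearest in feature space; keeping the model this small is one way to limit both overfitting on tiny databases and computational cost on mobile hardware.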
@article{sánchez-hevia2015automatic,
  author={Sánchez-Hevia, Héctor A. and Llerena-Aguilar, Cosme and Ramos-Auñón, Guillermo and Gil-Pita, Roberto},
  journal={Journal of the Audio Engineering Society},
  title={Automatic Vocal Percussion Transcription Aimed at Mobile Music Production},
  year={2015},
  volume={},
  number={},
  pages={},
  doi={},
  month={May},
  abstract={In this paper we present an automatic vocal percussion transcription system intended as an alternative to touchscreen input for drum and percussion programming. The objective of the system is to simplify the user's workflow by letting them create percussive tracks made up of different samples triggered by their own voice, without requiring any demanding skill, by tailoring the system to their specific needs. The system consists of three stages: event detection, feature extraction, and classification. We employ small user-generated databases to adapt to particular vocalizations while avoiding overfitting and keeping computational complexity as low as possible.},}
TY - paper
TI - Automatic Vocal Percussion Transcription Aimed at Mobile Music Production
SP -
EP -
AU - Sánchez-Hevia, Héctor A.
AU - Llerena-Aguilar, Cosme
AU - Ramos-Auñón, Guillermo
AU - Gil-Pita, Roberto
PY - 2015
JO - Journal of the Audio Engineering Society
IS -
VO -
VL -
Y1 - May 2015
AB - In this paper we present an automatic vocal percussion transcription system intended as an alternative to touchscreen input for drum and percussion programming. The objective of the system is to simplify the user's workflow by letting them create percussive tracks made up of different samples triggered by their own voice, without requiring any demanding skill, by tailoring the system to their specific needs. The system consists of three stages: event detection, feature extraction, and classification. We employ small user-generated databases to adapt to particular vocalizations while avoiding overfitting and keeping computational complexity as low as possible.
ER -
Authors:
Sánchez-Hevia, Héctor A.; Llerena-Aguilar, Cosme; Ramos-Auñón, Guillermo; Gil-Pita, Roberto
Affiliation:
University of Alcala, Alcalá de Henares, Madrid, Spain
AES Convention:
138 (May 2015)
Paper Number:
9352
Publication Date:
May 6, 2015
Subject:
Semantic Audio
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=17776