Classification of Spatial Audio Location and Content Using Convolutional Neural Networks
Citation & Abstract
T. Hirvonen, "Classification of Spatial Audio Location and Content Using Convolutional Neural Networks," Paper 9294, (2015 May).
Abstract: This paper investigates the use of Convolutional Neural Networks for spatial audio classification. In contrast to traditional methods that use hand-engineered features and algorithms, we show that a Convolutional Network in combination with generic preprocessing can give good results and allows for specialization to challenging conditions. The method can adapt to e.g. different source distances and microphone arrays, as well as estimate both spatial location and audio content type jointly. For example, with typical single-source material in a simulated reverberant room, we can achieve cross-validation accuracy of 94.3% for 40-ms frames across 16 classes (eight spatial directions, content type speech vs. music).
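The abstract outlines the approach only at a high level: generic preprocessing followed by a CNN that jointly predicts one of 16 classes (eight directions × two content types) from a 40-ms frame. As an illustration, the following is a minimal PyTorch sketch of such a joint classifier. The paper's actual architecture, microphone count, and preprocessing are not given on this page, so the 8-channel input, the 257 spectral bins, and all layer sizes below are assumptions, not the author's model.

# Minimal sketch only: the page gives no architecture details, so the
# 8-mic array, 257 FFT bins, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

N_MICS, N_BINS, N_CLASSES = 8, 257, 16  # 16 = 8 directions x 2 content types

class SpatialAudioCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # microphone channels act as input planes; the convolution
            # runs along the frequency axis of a single 40-ms frame
            nn.Conv1d(N_MICS, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, N_CLASSES)

    def forward(self, x):  # x: (batch, N_MICS, N_BINS) magnitude spectra
        return self.classifier(self.features(x).squeeze(-1))

# Joint label: class = direction_index * 2 + content_index (0 speech, 1 music).
frames = torch.rand(4, N_MICS, N_BINS)   # four preprocessed 40-ms frames
logits = SpatialAudioCNN()(frames)       # shape (4, 16) class scores

Encoding each (direction, content) pair as a single class index and training with cross-entropy over the 16 joint labels is one straightforward way to realize the joint location-and-content estimation the abstract describes.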
@article{hirvonen2015classification,
  author={Hirvonen, Toni},
  journal={Journal of the Audio Engineering Society},
  title={Classification of Spatial Audio Location and Content Using Convolutional Neural Networks},
  year={2015},
  month={May},
  abstract={This paper investigates the use of Convolutional Neural Networks for spatial audio classification. In contrast to traditional methods that use hand-engineered features and algorithms, we show that a Convolutional Network in combination with generic preprocessing can give good results and allows for specialization to challenging conditions. The method can adapt to e.g. different source distances and microphone arrays, as well as estimate both spatial location and audio content type jointly. For example, with typical single-source material in a simulated reverberant room, we can achieve cross-validation accuracy of 94.3% for 40-ms frames across 16 classes (eight spatial directions, content type speech vs. music).},
}
TY - CPAPER
TI - Classification of Spatial Audio Location and Content Using Convolutional Neural Networks
AU - Hirvonen, Toni
PY - 2015
Y1 - May 2015
JO - Journal of the Audio Engineering Society
AB - This paper investigates the use of Convolutional Neural Networks for spatial audio classification. In contrast to traditional methods that use hand-engineered features and algorithms, we show that a Convolutional Network in combination with generic preprocessing can give good results and allows for specialization to challenging conditions. The method can adapt to e.g. different source distances and microphone arrays, as well as estimate both spatial location and audio content type jointly. For example, with typical single-source material in a simulated reverberant room, we can achieve cross-validation accuracy of 94.3% for 40-ms frames across 16 classes (eight spatial directions, content type speech vs. music).
ER -
Author:
Hirvonen, Toni
Affiliation:
Dolby Laboratories, Stockholm, Sweden
AES Convention:
138 (May 2015)
Paper Number:
9294
Publication Date:
May 6, 2015
Subject:
Sound Localization and Separation
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=17718