SoundTorch: Quick Browsing in Large Audio Collections
S. Heise, M. Hlatky, and J. Loviscach, "SoundTorch: Quick Browsing in Large Audio Collections," Paper 7544 (2008 October).
Abstract: Musicians, sound engineers, and foley artists face the challenge of finding appropriate sounds in vast collections containing thousands of audio files. Imprecise naming and tagging forces users to review dozens of files in order to pick the right sound. Acoustic matching is not necessarily helpful here as it needs a sound exemplar to match with and may miss relevant files. Hence, we propose to combine acoustic content analysis with accelerated auditioning: Audio files are automatically arranged in 2D by psychoacoustic similarity. A user can shine a virtual flashlight onto this representation; all sounds in the light cone are played back simultaneously, their position indicated through surround sound. User tests show that this method can leverage the human brain's capability to single out sounds from a spatial mixture and enhance browsing in large collections of audio content.
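The flashlight-cone selection described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the 2D similarity layout is assumed given, the stereo pan is a crude stand-in for the paper's surround-sound positioning, and all function and parameter names are hypothetical.

```python
import math

def sounds_in_light_cone(positions, torch_xy, radius):
    """Return (index, pan) pairs for sounds inside the virtual light cone.

    positions: list of (x, y) points -- the 2D psychoacoustic-similarity
               layout (assumed precomputed, as in the paper).
    torch_xy:  (x, y) position of the virtual flashlight.
    radius:    cone radius in layout units.

    pan runs from -1 (left) to +1 (right), derived from the sound's
    bearing relative to the torch -- a simplified spatial cue.
    """
    tx, ty = torch_xy
    hits = []
    for i, (x, y) in enumerate(positions):
        dx, dy = x - tx, y - ty
        if math.hypot(dx, dy) <= radius:          # inside the light cone
            angle = math.atan2(dx, dy)            # bearing from the torch
            pan = max(-1.0, min(1.0, angle / (math.pi / 2)))
            hits.append((i, pan))
    return hits
```

All sounds returned would then be played back simultaneously, each panned to its position, letting the listener single out candidates from the spatial mixture.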
@article{heise2008soundtorch,
  author={Heise, Sebastian and Hlatky, Michael and Loviscach, Jörn},
  journal={Journal of the Audio Engineering Society},
  title={SoundTorch: Quick Browsing in Large Audio Collections},
  year={2008},
  month={October},
  abstract={Musicians, sound engineers, and foley artists face the challenge of finding appropriate sounds in vast collections containing thousands of audio files. Imprecise naming and tagging forces users to review dozens of files in order to pick the right sound. Acoustic matching is not necessarily helpful here as it needs a sound exemplar to match with and may miss relevant files. Hence, we propose to combine acoustic content analysis with accelerated auditioning: Audio files are automatically arranged in 2D by psychoacoustic similarity. A user can shine a virtual flashlight onto this representation; all sounds in the light cone are played back simultaneously, their position indicated through surround sound. User tests show that this method can leverage the human brain's capability to single out sounds from a spatial mixture and enhance browsing in large collections of audio content.},
}
TY - CPAPER
TI - SoundTorch: Quick Browsing in Large Audio Collections
AU - Heise, Sebastian
AU - Hlatky, Michael
AU - Loviscach, Jörn
PY - 2008
JO - Journal of the Audio Engineering Society
Y1 - October 2008
AB - Musicians, sound engineers, and foley artists face the challenge of finding appropriate sounds in vast collections containing thousands of audio files. Imprecise naming and tagging forces users to review dozens of files in order to pick the right sound. Acoustic matching is not necessarily helpful here as it needs a sound exemplar to match with and may miss relevant files. Hence, we propose to combine acoustic content analysis with accelerated auditioning: Audio files are automatically arranged in 2D by psychoacoustic similarity. A user can shine a virtual flashlight onto this representation; all sounds in the light cone are played back simultaneously, their position indicated through surround sound. User tests show that this method can leverage the human brain's capability to single out sounds from a spatial mixture and enhance browsing in large collections of audio content.
ER -
Authors:
Heise, Sebastian; Hlatky, Michael; Loviscach, Jörn
Affiliation:
Hochschule Bremen (University of Applied Sciences)
AES Convention:
125 (October 2008)
Paper Number:
7544
Publication Date:
October 1, 2008
Subject:
Audio Content Management
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=14696