Spatialized Additive Synthesis of Environmental Sounds
Citation & Abstract
C. Verron, M. Aramaki, R. Kronland-Martinet, and G. Pallone, "Spatialized Additive Synthesis of Environmental Sounds," Paper 7509, (2008 October).
Abstract: In virtual auditory environments, sound sources are typically created in two stages: first the dry monophonic signal is synthesized, and then the spatial attributes (such as source position, size, and directivity) are applied by specific signal-processing algorithms. In this paper we present an architecture that combines additive sound synthesis and 3D positional audio at the same level of sound generation. Our algorithm is based on inverse fast Fourier transform synthesis and amplitude-based sound positioning. It allows sinusoids and colored noise to be synthesized and spatialized efficiently, to simulate point-like and extended sound sources. The audio rendering can be adapted to any reproduction system (headphones, stereo, 5.1, etc.). The possibilities offered by the algorithm are illustrated with environmental sounds.
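The core idea, applying per-source panning gains to each partial in the frequency domain so that a single inverse FFT per output channel yields an already-spatialized signal, can be sketched as below. This is a minimal illustration under assumed simplifications (stereo constant-power panning, periodic Hann windows at 50% overlap, frequencies rounded to the nearest FFT bin), not the authors' implementation; their algorithm additionally covers colored noise, extended sources, and arbitrary reproduction setups.

# Minimal sketch (assumptions noted above) of IFFT-based additive synthesis
# with amplitude panning applied at the spectral level, so spatialization
# happens at the same stage as sound generation.
import numpy as np

FS = 44100          # sample rate (assumed)
N = 1024            # FFT frame size
HOP = N // 2        # 50% overlap; periodic Hann windows then sum to 1

def pan_gains(pan):
    """Constant-power stereo gains for pan in [0, 1] (0 = left, 1 = right)."""
    return np.cos(pan * np.pi / 2), np.sin(pan * np.pi / 2)

def synthesize(partials, n_frames):
    """partials: list of (freq_hz, amplitude, pan). Returns a (samples, 2) array."""
    out = np.zeros((n_frames * HOP + N, 2))
    phases = np.zeros(len(partials))
    for frame in range(n_frames):
        # One half-spectrum per output channel; all partials are summed here,
        # so the per-frame cost is a single inverse FFT per channel.
        spec = np.zeros((2, N // 2 + 1), dtype=complex)
        for i, (freq, amp, pan) in enumerate(partials):
            k = int(round(freq * N / FS))           # nearest-bin approximation
            if k < 2 or k > N // 2 - 2:
                continue                            # skip DC/Nyquist edge cases
            # Spectrum of a periodic-Hann-windowed cosine: main lobe over 3 bins.
            lobe = amp * np.exp(1j * phases[i]) * N * np.array([-0.125, 0.25, -0.125])
            gl, gr = pan_gains(pan)
            spec[0, k - 1:k + 2] += gl * lobe       # left-channel gain
            spec[1, k - 1:k + 2] += gr * lobe       # right-channel gain
            phases[i] += 2 * np.pi * k * HOP / N    # phase advance per hop
        start = frame * HOP
        for ch in range(2):
            out[start:start + N, ch] += np.fft.irfft(spec[ch], N)
    return out

# Example: two point-like sources, one panned left, one panned right.
audio = synthesize([(440.0, 0.4, 0.1), (660.0, 0.3, 0.9)], n_frames=200)

A production IFFT synthesizer would interpolate the window main lobe for fractional bin frequencies rather than rounding, and would generalize the two panning gains to one gain per loudspeaker (or per binaural filter) to match the target reproduction system.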
@inproceedings{verron2008spatialized,
author={Verron, Charles and Aramaki, Mitsuko and Kronland-Martinet, Richard and Pallone, Grégory},
title={Spatialized Additive Synthesis of Environmental Sounds},
booktitle={Audio Engineering Society Convention 125},
number={7509},
month={October},
year={2008},
abstract={In virtual auditory environments, sound sources are typically created in two stages: first the dry monophonic signal is synthesized, and then the spatial attributes (such as source position, size, and directivity) are applied by specific signal-processing algorithms. In this paper we present an architecture that combines additive sound synthesis and 3D positional audio at the same level of sound generation. Our algorithm is based on inverse fast Fourier transform synthesis and amplitude-based sound positioning. It allows sinusoids and colored noise to be synthesized and spatialized efficiently, to simulate point-like and extended sound sources. The audio rendering can be adapted to any reproduction system (headphones, stereo, 5.1, etc.). The possibilities offered by the algorithm are illustrated with environmental sounds.},}
TY - CPAPER
TI - Spatialized Additive Synthesis of Environmental Sounds
AU - Verron, Charles
AU - Aramaki, Mitsuko
AU - Kronland-Martinet, Richard
AU - Pallone, Grégory
T2 - Audio Engineering Society Convention 125
M1 - 7509
PY - 2008
Y1 - 2008/10/01
AB - In virtual auditory environments, sound sources are typically created in two stages: first the dry monophonic signal is synthesized, and then the spatial attributes (such as source position, size, and directivity) are applied by specific signal-processing algorithms. In this paper we present an architecture that combines additive sound synthesis and 3D positional audio at the same level of sound generation. Our algorithm is based on inverse fast Fourier transform synthesis and amplitude-based sound positioning. It allows sinusoids and colored noise to be synthesized and spatialized efficiently, to simulate point-like and extended sound sources. The audio rendering can be adapted to any reproduction system (headphones, stereo, 5.1, etc.). The possibilities offered by the algorithm are illustrated with environmental sounds.
ER -
Authors:
Verron, Charles; Aramaki, Mitsuko; Kronland-Martinet, Richard; Pallone, Grégory
Affiliations:
Orange Labs; Laboratoire de Mécanique et d'Acoustique; Institut de Neurosciences Cognitives de la Méditerranée (see document for exact affiliation information)
AES Convention:
125 (October 2008)
Paper Number:
7509
Publication Date:
October 1, 2008
Subject:
Analysis and Synthesis of Sound
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=14661