Virtual Studio Production Tools with Personalized Head Related Transfer Functions for Mixing and Monitoring Dolby Atmos and Multichannel Sound
Citation & Abstract
K. Sunder and S. Jain, "Virtual Studio Production Tools with Personalized Head Related Transfer Functions for Mixing and Monitoring Dolby Atmos and Multichannel Sound," Engineering Brief 674, (2022 May).
Abstract: With the increasing popularity of audiophile headphones in this decade, the need for mixing over headphones is on the rise. Studio engineers use headphones as a critical tool for checking their mixes before publishing them. As Dolby Atmos and surround sound music regain popularity, there is also an increasing need for multichannel speaker setups and associated gear in the studio to produce music in such formats. Such systems are extremely expensive and time-consuming to set up. In this engineering brief, we present virtual studio production tools for mixing and monitoring Atmos and multichannel sound with personalized head-related transfer functions (HRTFs). This paper describes in detail how the acoustics of the studio, including the speaker and headphone responses, are captured accurately for a truly immersive experience. The acoustic fingerprint of the studio is then integrated with personalized HRTFs predicted by machine learning algorithms that use an ear image as input. Such novel tools will put the power of personalized spatial audio and Dolby Atmos production into the hands of millions of at-home mixing engineers and producers.
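The abstract outlines a virtualization pipeline: speaker and room responses captured in a reference studio are combined with personalized HRTFs and a headphone correction filter to monitor a multichannel mix over headphones. The sketch below is not the authors' implementation; it is a minimal illustration, under assumed names and array shapes, of one common way such virtual speaker monitoring is realized, by convolving each speaker feed with a per-speaker binaural room impulse response (BRIR) and then applying headphone equalization.

# Minimal illustrative sketch (not the authors' implementation): virtual speaker
# monitoring by convolving each speaker feed with a personalized BRIR and
# applying headphone equalization. All names, array shapes, and placeholder
# data are assumptions made for this example.
import numpy as np
from scipy.signal import fftconvolve


def render_virtual_studio(speaker_feeds, brirs, headphone_eq_ir):
    """Render a channel-based mix binaurally for headphone monitoring.

    speaker_feeds:   (n_speakers, n_samples) speaker feeds of the mix
    brirs:           (n_speakers, 2, brir_len) personalized BRIRs, one pair
                     (left/right ear) per virtual speaker, combining the
                     predicted HRTF with the captured studio acoustics
    headphone_eq_ir: (2, eq_len) inverse headphone response per ear
    returns:         (2, out_len) binaural signal
    """
    n_speakers, n_samples = speaker_feeds.shape
    out_len = n_samples + brirs.shape[2] - 1
    binaural = np.zeros((2, out_len))
    for s in range(n_speakers):
        for ear in range(2):
            # Place speaker s at its studio position relative to the listener.
            binaural[ear] += fftconvolve(speaker_feeds[s], brirs[s, ear])
    # Compensate the monitoring headphone's own frequency response.
    left = fftconvolve(binaural[0], headphone_eq_ir[0])
    right = fftconvolve(binaural[1], headphone_eq_ir[1])
    return np.stack([left, right])


if __name__ == "__main__":
    # Random placeholder data; a real tool would load measured room/speaker
    # responses and HRTFs predicted from an ear image.
    rng = np.random.default_rng(0)
    feeds = rng.standard_normal((7, 48000))   # e.g. a 7-channel bed at 48 kHz
    brirs = rng.standard_normal((7, 2, 4096))
    hp_eq = rng.standard_normal((2, 512))
    print(render_virtual_studio(feeds, brirs, hp_eq).shape)  # (2, 52606)

In practice, the BRIRs would be built from the studio, speaker, and headphone responses described in the brief together with HRTFs predicted from an ear photograph, and a partitioned (block) convolution would replace the offline fftconvolve for real-time monitoring.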
@article{sunder2022virtual,
  author={Sunder, Kaushik and Jain, Sunder},
  journal={Journal of the Audio Engineering Society},
  title={Virtual Studio Production Tools with Personalized Head Related Transfer Functions for Mixing and Monitoring Dolby Atmos and Multichannel Sound},
  year={2022},
  volume={},
  number={},
  pages={},
  doi={},
  month={May},
  abstract={With the increasing popularity of audiophile headphones in this decade, the need for mixing over headphones is on the rise. Studio engineers use headphones as a critical tool for checking their mixes before publishing them. As Dolby Atmos and surround sound music regain popularity, there is also an increasing need for multichannel speaker setups and associated gear in the studio to produce music in such formats. Such systems are extremely expensive and time-consuming to set up. In this engineering brief, we present virtual studio production tools for mixing and monitoring Atmos and multichannel sound with personalized head-related transfer functions (HRTFs). This paper describes in detail how the acoustics of the studio, including the speaker and headphone responses, are captured accurately for a truly immersive experience. The acoustic fingerprint of the studio is then integrated with personalized HRTFs predicted by machine learning algorithms that use an ear image as input. Such novel tools will put the power of personalized spatial audio and Dolby Atmos production into the hands of millions of at-home mixing engineers and producers.},
}
TY - paper
TI - Virtual Studio Production Tools with Personalized Head Related Transfer Functions for Mixing and Monitoring Dolby Atmos and Multichannel Sound
SP -
EP -
AU - Sunder, Kaushik
AU - Jain, Sunder
PY - 2022
JO - Journal of the Audio Engineering Society
IS -
VO -
VL -
Y1 - May 2022
AB - With the increasing popularity of audiophile headphones in this decade, the need for mixing over headphones is on the rise. Studio engineers use headphones as a critical tool for checking their mixes before publishing them. As Dolby Atmos and surround sound music regain popularity, there is also an increasing need for multichannel speaker setups and associated gear in the studio to produce music in such formats. Such systems are extremely expensive and time-consuming to set up. In this engineering brief, we present virtual studio production tools for mixing and monitoring Atmos and multichannel sound with personalized head-related transfer functions (HRTFs). This paper describes in detail how the acoustics of the studio, including the speaker and headphone responses, are captured accurately for a truly immersive experience. The acoustic fingerprint of the studio is then integrated with personalized HRTFs predicted by machine learning algorithms that use an ear image as input. Such novel tools will put the power of personalized spatial audio and Dolby Atmos production into the hands of millions of at-home mixing engineers and producers.
Authors:
Sunder, Kaushik; Jain, Sunder
Affiliation:
Embody, San Mateo, CA, USA
AES Convention:
152 (May 2022), eBrief: 674
Publication Date:
May 2, 2022
Subject:
Binaural Audio
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=21735
The Engineering Briefs at this Convention were selected on the basis of a submitted synopsis, ensuring that they are of interest to AES members and are not overly commercial. These briefs have been reproduced from the authors' advance manuscripts, without editing, corrections, or consideration by the Review Board. The AES takes no responsibility for their contents. Paper copies are not available, but any member can freely access these briefs. Members are encouraged to provide comments that enhance their usefulness.