Deep Neural Network Based HRTF Personalization Using Anthropometric Measurements
Citation & Abstract
C. J. Chun, J. M. Moon, G. W. Lee, N. K. Kim, and H. K. Kim, "Deep Neural Network Based HRTF Personalization Using Anthropometric Measurements," Paper 9860, (2017 October). doi:
Abstract: A head-related transfer function (HRTF) is a simple yet powerful tool for producing spatial sound by filtering monaural sound. It represents the effects of the head, body, and pinna, as well as the pathway from a given source position to a listener's ears. Unfortunately, although the characteristics of the HRTF differ slightly from person to person, it is common to use a head-related impulse response (HRIR) averaged over all subjects. In addition, it is difficult to measure individual HRTFs for all horizontal and vertical directions. Thus, this paper proposes a deep neural network (DNN)-based HRTF personalization method using anthropometric measurements. To this end, the CIPIC HRTF database, a public-domain database of HRTF measurements, is analyzed to build a DNN model for HRTF personalization. The input features for the DNN are the anthropometric measurements, including head, torso, and pinna information, and the output labels are the HRIR samples of the left ear. The performance of the proposed method is evaluated by computing the root-mean-square error (RMSE) and log-spectral distortion (LSD) between the reference HRIR and the one estimated by the proposed method. The results show that the RMSE and LSD of the estimated HRIR are smaller than those of the HRIR averaged over all subjects in the CIPIC HRTF database.
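As a rough illustration of the approach described in the abstract, the sketch below is not taken from the paper: the feature count, HRIR length, layer sizes, training settings, and the use of PyTorch are all assumptions for illustration. It regresses left-ear HRIR samples from CIPIC-style anthropometric measurements with a small feed-forward DNN and evaluates the result with RMSE and LSD.

# Minimal sketch of the kind of model described above -- not the authors'
# implementation. The feature count (27), HRIR length (200 samples, as in
# CIPIC), layer sizes, and training settings are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn

N_ANTHRO = 27    # head/torso/pinna measurements for one ear side (assumed)
HRIR_LEN = 200   # CIPIC HRIRs are 200 samples long

# Feed-forward DNN: anthropometric measurements in, left-ear HRIR samples out.
model = nn.Sequential(
    nn.Linear(N_ANTHRO, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, HRIR_LEN),
)

def train(model, x, y, epochs=500, lr=1e-3):
    """x: (n_subjects, N_ANTHRO) features, y: (n_subjects, HRIR_LEN) target HRIRs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

def rmse(h_ref, h_est):
    """Root-mean-square error between a reference and an estimated HRIR."""
    return np.sqrt(np.mean((h_ref - h_est) ** 2))

def lsd(h_ref, h_est, n_fft=256, eps=1e-12):
    """Log-spectral distortion (dB) between the magnitude spectra of two HRIRs."""
    H_ref = np.abs(np.fft.rfft(h_ref, n_fft)) + eps
    H_est = np.abs(np.fft.rfft(h_est, n_fft)) + eps
    return np.sqrt(np.mean((20.0 * np.log10(H_ref / H_est)) ** 2))

# Example usage with random stand-ins for CIPIC subjects:
# x = torch.randn(35, N_ANTHRO); y = torch.randn(35, HRIR_LEN)
# train(model, x, y)
# print(rmse(y[0].numpy(), model(x[:1]).detach().numpy()[0]))

Note that CIPIC provides HRIRs at many source directions per subject, so a full personalization would need either one such model per direction or the direction as an additional input; the sketch above covers a single direction.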
@article{chun2017deep,
  author={Chun, Chan Jun and Moon, Jung Min and Lee, Geon Woo and Kim, Nam Kyun and Kim, Hong Kook},
  journal={Journal of the Audio Engineering Society},
  title={Deep Neural Network Based HRTF Personalization Using Anthropometric Measurements},
  year={2017},
  volume={},
  number={},
  pages={},
  doi={},
  month={October},
  abstract={A head-related transfer function (HRTF) is a very simple and powerful tool for producing spatial sound by filtering monaural sound. It represents the effects of the head, body, and pinna as well as the pathway from a given source position to a listener's ears. Unfortunately, while the characteristics of HRTF differ slightly from person to person, it is usual to use the HRIR that is averaged over all the subjects. In addition, it is difficult to measure individual HRTFs for all horizontal and vertical directions. Thus, this paper proposes a deep neural network (DNN)-based HRTF personalization method using anthropometric measurements. To this end, the CIPIC HRTF database, which is a public domain database of HRTF measurements, is analyzed to generate a DNN model for HRTF personalization. The input features for the DNN are taken as the anthropometric measurements, including the head, torso, and pinna information. Additionally, the output labels are taken as the head-related impulse response (HRIR) samples of a left ear. The performance of the proposed method is evaluated by computing the root-mean-square error (RMSE) and log-spectral distortion (LSD) between the referenced HRIR and the estimated one by the proposed method. Consequently, it is shown that the RMSE and LSD for the estimated HRIR are smaller than those of the HRIR averaged over all the subjects from the CIPIC HRTF database.},}
TY - paper
TI - Deep Neural Network Based HRTF Personalization Using Anthropometric Measurements
SP -
EP -
AU - Chun, Chan Jun
AU - Moon, Jung Min
AU - Lee, Geon Woo
AU - Kim, Nam Kyun
AU - Kim, Hong Kook
PY - 2017
JO - Journal of the Audio Engineering Society
IS -
VO -
VL -
Y1 - October 2017
AB - A head-related transfer function (HRTF) is a very simple and powerful tool for producing spatial sound by filtering monaural sound. It represents the effects of the head, body, and pinna as well as the pathway from a given source position to a listener’s ears. Unfortunately, while the characteristics of HRTF differ slightly from person to person, it is usual to use the HRIR that is averaged over all the subjects. In addition, it is difficult to measure individual HRTFs for all horizontal and vertical directions. Thus, this paper proposes a deep neural network (DNN)-based HRTF personalization method using anthropometric measurements. To this end, the CIPIC HRTF database, which is a public domain database of HRTF measurements, is analyzed to generate a DNN model for HRTF personalization. The input features for the DNN are taken as the anthropometric measurements, including the head, torso, and pinna information. Additionally, the output labels are taken as the head-related impulse response (HRIR) samples of a left ear. The performance of the proposed method is evaluated by computing the root-mean-square error (RMSE) and log-spectral distortion (LSD) between the referenced HRIR and the estimated one by the proposed method. Consequently, it is shown that the RMSE and LSD for the estimated HRIR are smaller than those of the HRIR averaged over all the subjects from the CIPIC HRTF database.
Authors:
Chun, Chan Jun; Moon, Jung Min; Lee, Geon Woo; Kim, Nam Kyun; Kim, Hong Kook
Affiliations:
Korea Institute of Civil Engineering and Building Technology (KICT), Goyang, Korea; Gwangju Institute of Science and Technology (GIST), Gwangju, Korea; Gwangju Institute of Science and Technology (GIST), Gwangju, Korea; Gwangju Institute of Science and Technology (GIST), Gwangju, Korea (See document for exact affiliation information.)
AES Convention:
143 (October 2017)
Paper Number:
9860
Publication Date:
October 8, 2017
Subject:
Spatial Audio
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=19257