
AES Section Meeting Reports

Argentina - March 9, 2021

Meeting Topic: The MARRS app and 3D recording & listening with ESMA-3D

Moderator Name:

Speaker Name: Dr. Hyunkook Lee

Meeting Location:

Summary

On Tuesday, March 9, at 2 PM in Buenos Aires, Argentina (5 PM in Huddersfield, UK), we had the privilege of receiving Dr. Hyunkook Lee. Another big "transatlantic" moment for us: last year we had Francis Rumsey, and now Hyunkook Lee, the man himself, who prepared a presentation specifically on the MARRS app and on 3D recording & listening using ESMA-3D and other resources. MARRS and ESMA-3D explained by their author! And, as if all of that were not enough, he shared even more knowledge with us in a terrific Q&A afterwards. Here is a summary of the topics covered:
-MARRS, the Microphone Array Recording and Reproduction Simulator, is built on a perceptual model of source localization based on interchannel level difference (ICLD) and interchannel time difference (ICTD). This research makes it possible to visualize where virtual sources appear across the width of the stereo image for a given stereo technique, and to see how they shift on the horizontal plane when microphone spacing, angle, height and pitch (vertical orientation/tilt) are changed, as well as when the real source position in the hypothetical recording changes.
-With linear time and level trade-off functions, a full 30° image shift (i.e., the image fully localized at one loudspeaker of the standard 60° monitoring setup, at a direct-field critical listening position) corresponds to roughly 1 ms of ICTD or 17 dB of ICLD; a small numerical sketch of this trade-off follows the list below.
-ESMA-3D, the Equal Segment Microphone Array 3D, can provide a much wider listening experience than Ambisonics over loudspeakers; it is also based on the ICLD/ICTD trade-off. The array is vertically coincident and horizontally spaced, using cardioids with 50 cm spacing. The effect of vertical microphone spacing on spatial impression is not significant.
-In order to keep the source image localized at the height of the main loudspeaker layer, the vertical interchannel crosstalk needs to be suppressed by at least 7 dB for ICTDs of 1-10 ms, or by 9.5 dB for 0 ms ICTD; a second sketch below encodes this rule of thumb. Vertical source localization relies mainly on spectral cues.
-With supercardioids, as in the ORTF-3D, a smaller spacing is sufficient because the more directional capsules produce greater level differences. For soundscapes, an Ambisonics microphone works very well for binaural recording, but for loudspeaker reproduction, arrays like the ESMA-3D or techniques like the ORTF-3D provide more realistic results. Ambisonics binaural decoding works well at 3rd order or above for most kinds of sounds, but for good externalization you need good BRIRs and/or good room acoustic simulation.
-Hybrid stereo/3D mixes work well in this early exploration of pop/rock mixing in 3D. Elements that traditionally sit well at the centre in mono/stereo, like vocals, can sound strange or lack externalization in 3D, while other instruments and textures are better suited to out-of-the-head placement. Diffuse-field equalized HRTFs need to be played back on diffuse-field equalized headphones for good externalization. In some experiments there is evidence that some people preferred other people's HRTFs, so the question of individualized HRTFs is still at an early stage of research in terms of subjective evaluation, and also in terms of training: apparently trained subjects can learn to perceive externalization without personalized HRTFs.
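
As promised above, here is a minimal, illustrative Python sketch of the linear time/level trade-off. It is not the actual MARRS model (which is derived from Dr. Lee's psychoacoustic test data); it simply assumes the two cues add linearly up to the full-shift values of 1 ms and 17 dB quoted in the talk. The function name predicted_image_shift and all constants are hypothetical.

# Minimal sketch (not the actual MARRS model): predicted perceived image
# shift for a 2-channel stereo pair, assuming a simple linear time/level
# trade-off in which roughly 1 ms of ICTD or 17 dB of ICLD on its own
# produces the full 30 degree shift to one loudspeaker of a standard
# 60 degree setup, and that the two cues add linearly until saturation.

FULL_SHIFT_DEG = 30.0   # half of the standard 60 degree monitoring angle
ICTD_FULL_MS = 1.0      # ICTD giving a full shift on its own (figure from the talk)
ICLD_FULL_DB = 17.0     # ICLD giving a full shift on its own (figure from the talk)

def predicted_image_shift(ictd_ms: float, icld_db: float) -> float:
    """Predicted image shift in degrees (0 = centre,
    30 = fully at the earlier/louder loudspeaker)."""
    shift_fraction = ictd_ms / ICTD_FULL_MS + icld_db / ICLD_FULL_DB
    return FULL_SHIFT_DEG * min(1.0, max(0.0, shift_fraction))

if __name__ == "__main__":
    # e.g. a near-coincident pair producing 0.3 ms and 5 dB towards the left
    print(predicted_image_shift(0.3, 5.0))   # about 17.8 degrees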
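
And here is a similarly hedged sketch of the vertical crosstalk rule of thumb. It only encodes the two figures quoted in the talk (7 dB of suppression for 1-10 ms ICTD, 9.5 dB for 0 ms) and makes no claim about the region in between; the function names are hypothetical.

# Minimal sketch of the vertical crosstalk rule of thumb: to keep the
# phantom image at the main (lower) loudspeaker layer, the vertically
# leaked signal must sit at least this far below the main-layer signal.

def required_vertical_suppression_db(vertical_ictd_ms: float) -> float:
    """Minimum attenuation (dB) of the upper-layer crosstalk relative to
    the main-layer signal, for a given vertical ICTD."""
    if vertical_ictd_ms == 0.0:
        return 9.5
    if 1.0 <= vertical_ictd_ms <= 10.0:
        return 7.0
    raise ValueError("No figure was quoted in the talk for this ICTD")

def image_stays_at_main_layer(crosstalk_level_db: float,
                              vertical_ictd_ms: float) -> bool:
    """crosstalk_level_db: upper-layer level relative to the main layer
    (e.g. -8.0 means the crosstalk is 8 dB below the main signal)."""
    return -crosstalk_level_db >= required_vertical_suppression_db(vertical_ictd_ms)

if __name__ == "__main__":
    print(image_stays_at_main_layer(-8.0, 5.0))   # True: 8 dB >= 7 dB
    print(image_stays_at_main_layer(-8.0, 0.0))   # False: 8 dB < 9.5 dB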
Thank you very much, Hyunkook, for sharing your knowledge and your valuable time with us with such willingness and kindness.

Written By:

