
2018 AES International Conference on Spatial Reproduction — Aesthetics and Science


Program

Please note: the following program is subject to change without notice.

Day 0 (August 6th)

Technical Tour [14:00–16:00]

CD-607 Mixing studio for 22.2 Multi-channel Audio, NHK Broadcasting Center
Capacity: 15 people (Preregistration will be required)

Welcome Concert [18:00–20:00]

Japanese Traditional Music. Studio A, Senju Campus, Tokyo University of the Arts

Day 1 (August 7th)

Opening Ceremony [9:00–9:30]

Welcome address “Future of IoT Era” by Hiroshi Yasuda (President, Tokyo Denki University)

Keynote Address 1 [9:30–10:15]

“The WOW Factor” — Jim Anderson (Anderson Audio)

Who has Wow? What is Wow? When can I get Wow? Where is Wow? Why is Wow needed? How can I get Wow?

Over one hundred years ago, audiences experienced “Wow” listening to a singer and comparing the sound with a recording; observers found it “almost impossible to tell the difference” between what was live and what was recorded. Sixty years ago, the transition from monaural to stereophonic sound brought ‘realism’ into listeners’ homes, and today audiences can be immersed in sound. This talk will trace the history of how listeners have been educated and entertained, through to the latest sonic developments, saying to themselves and each other: “Wow!”

Paper Presentation

P1: Binaural-1 [10:30–12:10]
P1-1 “Evaluation of Robustness of Dynamic Crosstalk Cancellation for Binaural Reproduction,” Ryo Matsuda (Kyoto University), Makoto Otani (Kyoto University), and Hiraku Okumura (Yamaha Corporation / Kyoto University)
P1-2 “Optimization and prediction of the spherical and ellipsoidal ITD model parameters using offset ears,” Helene Bahu (Dysonics), and David Romblom (Dysonics)
P1-3 “The effects of dynamic and spectral cue on auditory vertical localization,” Jianliang Jiang (South China University of Technology), Bosun Xie (Acoustic Lab., School of Physics and Optoelectronics), Haiming Mai (South China University of Technology), Lulu Liu (South China University of Technology), Kailing Yi (South China University of Technology), and Chengyun Zhang (School of Mechanical & Electrical Engineering, Guangzhou University)
P1-4 “Detection of a nearby wall in a virtual echolocation scenario based on measured and simulated OBRIRs,” Annika Neidhardt (Technische Universität Ilmenau)
P2: Binaural-2 [13:10–14:00]
P2-1 “Dataset of near-distance head-related transfer functions calculated using the boundary element method,” César Salvador (Tohoku University), Shuichi Sakamoto (Tohoku University), Jorge Treviño (Tohoku University), and Yôiti Suzuki (Tohoku University)
P2-2 “Estimation of the spectral notch frequency of the individual early head-related transfer function in the upper median plane based on the anthropometry of the listener's pinnae,” Kazuhiro Iida (Chiba Institute of Technology), Masato Oota (Chiba Institute of Technology), and Hikaru Shimazaki (Chiba Institute of Technology)
P3: Psychoacoustics-1 [16:00–17:15]
P3-1 “Evaluation of the frequency dependent influence of direct sound on the effect of sound projection,” Tom Wühle (TU Dresden), Sebastian Merchel (TU Dresden), and Ercan Altinsoy (TU Dresden)
P3-2 “Influence of the frequency dependent directivity of a sound projector on the localization of projected sound,” Sebastian Merchel (TU Dresden), Tom Wühle (TU Dresden), Felix Reichmann (TU Dresden), and Ercan Altinsoy (TU Dresden)
P3-3 “A quantitative evaluation of media device orchestration for immersive spatial audio reproduction,” James Woodcock (University of Salford), Jon Francombe (BBC Research and Development), Richard Hughes (University of Salford), Russell Mason (University of Surrey), William J. Davies (University of Salford), and Trevor J. Cox (University of Salford)
P4: Psychoacoustics-2 [17:20–18:30]
P4-1 “Energy Aware Modelling of Inter-Aural Level Difference Distortion Impact on Spatial Audio Perception,” Pablo Delgado (International Audio Laboratories Erlangen), Jürgen Herre (Fraunhofer), Armin Taghipour (Fraunhofer), and Nadja Schinkel-Bielefeld (Fraunhofer)
P4-2 “The effect of characteristics of sound source on the perceived differences in the number of playback channels of multi-channel audio reproduction system,” Misaki Hasuo (WOWOW Inc.), Toru Kamekawa (Tokyo University of the Arts), and Atsushi Marui (Tokyo University of the Arts)
P4-3 “Discrimination of auditory spatial diffuseness facilitated by head rolling while listening to ‘with-height’ versus ‘without-height’ multichannel loudspeaker reproduction,” William Martens (The University of Sydney) and Yutong Han (The University of Sydney)
Poster-1 (Full Paper) [14:15–15:45]
PP-1 “Challenges of distributed real-time finite-difference time-domain room acoustic simulation for auralization,” Jukka Saarelma (Aalto University), Jonathan Califa (Oculus VR, Facebook), and Ravish Mehra (Oculus VR, Facebook)
PP-2 “The Number of Virtual Loudspeakers and the Error for Spherical Microphone Array Recording and Binaural Rendering,” Jianliang Jiang (South China University of Technology), Bosun Xie (Acoustic Lab., School of Physics and Optoelectronics), and Haiming Mai (South China University of Technology)
PP-3 “Effect of Individual cue on distance perception for nearby sound sources in static virtual reproduction system,” Liliang Wang (South China University of Technology) and Guangzheng Yu (South China University of Technology)
PP-4 “Controlling The Apparent Source Size In Ambisonics Using Decorrelation Filters,” Timothy Schmele (Eurecat) and Umut Sayin (Eurecat)
PP-5 “Sound field renderer with loudspeaker array using real-time convolver,” Takao Tsuchiya (Doshisha Univ.), Issei Takenuki (Doshisha Univ.), and Kyousuke Sugiura (Doshisha Univ.)
PP-6 “Proposal of a sound source separation method using image signal processing of a spatio-temporal sound pressure distribution image,” Kenji Ozawa (University of Yamanashi), Masaaki Ito (University of Yamanashi), Genya Shimizu (University of Yamanashi), Masanori Morise (University of Yamanashi), and Shuichi Sakamoto (Tohoku University)

Workshop

  • W1 [14:30–15:45]: “The Real World of Immersive Surround Production Techniques in JAPAN”, Mick Sawaguchi (UNAMAS-Label), Hideo Irimajiri (WOWOW)
  • W2 [16:00–17:15]: “Techniques for recording and mixing pop, rock, and jazz music for 22.2 Multichannel Sound”, Will Howie (McGill University)
  • W3 [17:30–18:30]: “Microphone arrangement comparison of orchestral instrument for recording producer, balance engineer education”, Thorsten Weigelt (Berlin University of the Arts), Kazuya Nagae (Nagoya University of Arts)

Tutorial

  • T1 [10:30–11:45]: “Psychoacoustics of 3D sound recording and reproduction”, Hyunkook Lee (University of Huddersfield)
  • T2 [13:00–14:15]: “New Surround and Immersive Recordings”, Jim Anderson, Ulrike Schwarz (Anderson Audio), and Kimio Hamasaki (ARTSRIDGE LLC)

Day 2 (August 8th)

Keynote Address 2 [9:00–9:45]

“Rendering of multichannel audio”, Akio Ando (University of Toyama)

Recent multichannel audio formats can capture a sound field with a heightened sense of reality. Because such formats carry detailed spatial information about the sound field from the production stage, reproduction need not use the same loudspeaker arrangement as production; instead of setting up those loudspeakers, audio rendering is required on the reproduction side.

Many studies have addressed audio rendering. Because these methods are based on the directional information of the channels, they can handle direct sound; what has been lacking, however, is a rendering method for diffuse sound. This presentation introduces an audio rendering method based on decomposing the audio signal into direct and diffuse components, and clarifies the requirements for rendering diffuse sound from the viewpoint of sound reflection in a room.
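As background for readers, the sketch below illustrates one common coherence-based way to split a channel pair into direct (coherent) and diffuse (incoherent) components. It is a generic textbook approach in Python, not necessarily the method presented in this keynote; the STFT size and smoothing length are illustrative assumptions.

    # Generic coherence-based direct/diffuse split for one channel pair.
    # Illustrative only; not necessarily the keynote's algorithm.
    import numpy as np
    from scipy.signal import stft, istft

    def direct_diffuse_split(x_l, x_r, fs, nperseg=1024):
        """Return (direct, diffuse) estimates for the left channel."""
        _, _, L = stft(x_l, fs, nperseg=nperseg)
        _, _, R = stft(x_r, fs, nperseg=nperseg)

        def smooth(a):  # moving average along the time axis
            k = np.ones(8) / 8.0
            return np.apply_along_axis(lambda v: np.convolve(v, k, "same"), 1, a)

        eps = 1e-12
        s_ll = smooth(np.abs(L) ** 2)            # auto-spectra
        s_rr = smooth(np.abs(R) ** 2)
        s_lr = smooth(L * np.conj(R))            # cross-spectrum
        coh = np.abs(s_lr) / np.sqrt(s_ll * s_rr + eps)  # 0 = diffuse, 1 = direct

        direct = istft(coh * L, fs, nperseg=nperseg)[1]        # coherent part
        diffuse = istft((1.0 - coh) * L, fs, nperseg=nperseg)[1]
        return direct, diffuse

In a rendering chain of the kind the abstract describes, the coherent part could then be panned using channel direction information, while the diffuse residual would be distributed across the target loudspeaker layout.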

Paper Presentation

P5: Game Audio [10:00–12:05]
P5-1 “Personality and listening sensitivity correlates of the subjective response to real and simulated sound sources in virtual reality,” J. William Bailey (University of Salford), and Bruno Miguel Fazenda (University of Salford)
P5-2 “A Low Cost Method for Applying Acoustic Features to Each Sound in Virtual 3D Space Using Layered Quadtree Grids,” Kenji Kojima (CAPCOM Co., Ltd.) and Takahiro Kitagawa (CAPCOM Co., Ltd.)
P5-3 “Proximity representation of VR world — Realization of the real-time binaural for video games,” Kentaro Nakashima (CAPCOM Co., Ltd.), Shotaro Nakayama (CAPCOM Co., Ltd.), and Joji Kuriyama (J.TESORI Co., Ltd.)
P5-4 “Impact of HRTF individualization on player performance in a VR shooter game,” David Poirier-Quinot (Sorbonne Université, CNRS, Institut Jean Le Rond d'Alembert) and Brian Katz (Sorbonne Université, CNRS, Institut Jean Le Rond d'Alembert)
P5-5 “The Effect of Virtual Environments on Localization during a 3D Audio Production Task,” Matthew Boerum (McGill University), Jack Kelly (McGill University), Diego Quiroz (McGill University), and Patrick Cheiban (McGill University)
P6: Psychoacoustics-3 [14:45–15:35]
P6-1 “Recognition of an auditory environment: investigating room-induced influences on immersive experience,” Sungyoung Kim (Rochester Institute of Technology), Richard King (McGill University), Toru Kamekawa (Tokyo University of the Arts), and Shuichi Sakamoto (Tohoku University)
P6-2 “On Human Perceptual Bandwidth and Slow Listening,” Thomas Lund (Genelec OY) and Aki Mäkivirta (Genelec OY)
P7: Ambisonics-1 [15:45–17:00]
P7-1 “Directional filtering of Ambisonic sound scenes,” Pierre Lecomte (Centre Lyonnais d'Acoustique), Philippe-Aubert Gauthier (Université de Sherbrooke), Alain Berry (Université de Sherbrooke), Alexandre Garcia (Conservatoire National des Arts et Métiers), and Christophe Langrenne (Conservatoire National des Arts et Métiers)
P7-2 “Spatialization Pipeline for Digital Audio Workstations with Three Spatialization Techniques,” Tanner Upthegrove (Virginia Tech), Charles Nichols (Virginia Tech), and Michael Roan (Virginia Tech)
P7-3 “Size of the accurate-reproduction area as function of directional metrics in spherical harmonic domain,” Pierre Grandjean (Université de Sherbrooke), Philippe-Aubert Gauthier (Université de Sherbrooke), and Alain Berry (Université de Sherbrooke)
P8: Ambisonics-2, VR [17:10–18:00]
P8-1 “Extending the listening region of high-order Ambisonics through series acceleration,” Jorge Treviño (Tohoku University), Shuichi Sakamoto (Tohoku University), Yôiti Suzuki (Tohoku University)
P8-2 “Psychophysical validation of binaurally processed sound superimposed upon environmental sound via an unobstructed pinna and an open-ear-canal earspeaker,” Frederico Pereira (The University of Sydney) and William Martens (The University of Sydney)
Poster-2 (eB) [13:15–14:30]
EB1-1 “Acoustic characteristics of headphones and earphones,” Yoshiharu Soeta (National Institute of Advanced Industrial Science and Technology) and Sho Ooi (National Institute of Advanced Industrial Science and Technology)
EB1-2 “Tetrahedral Microphone: A Versatile “Spot” and Ambience Receiver for 3D Mixing and Sound Design,” Wieslaw Woszczyk (McGill University), Jennifer Nulsen (McGill University), Ephraim Hahn (McGill University), Haruka Nagata (McGill University), Kseniya Degtyareva (McGill University), and John Castillo (McGill University)
EB1-3 “A Mid-Air Gestural Controller for the Pyramix 3D panner,” Diego I Quiroz Orozco (McGill University / CIRMMT)
EB1-4 “Recording and mixing techniques for ambisonics,” Tomoyuki Nishiyama (NHK)
EB1-5 “Analysis of Sound Localization on Median Plane by using Specular Reflection Shape Features,” Tatsuya Shibata (Tokyo Denki University), Yuko Watanabe (Tokyo Denki University), and Akito Michishita (Route Theta)
EB1-6 “Movie Scene Classification using Audio Signals in a Home-Theater System,” Yuta Yuyama (Yamaha Corporation), Sungyoung Kim (Rochester Institute of Technology), and Hiraku Okumura (Yamaha Corporation)
EB1-7 “Accurate spatialization of VR sound sources in the near field,” Camilo Arévalo (Pontificia Universidad Javeriana Cali), Gerardo M. Sarria M. (Pontificia Universidad Javeriana Cali), and Julián Villegas (University of Aizu)
EB1-8 “Creating a highly-realistic “Acoustic Vessel Odyssey” using Sound Field Synthesis with 576 Loudspeakers,” Yuki Mitsufuji (Sony Corporation), Asako Tomura (Sony Corporation), and Kazunobu Ohkuri (Sony Corporation)
EB1-9 “Loudspeaker Positions with Sufficient Natural Channel Separation for Binaural Reproduction,” Kat Young (University of York), Gavin Kearney (University of York), and Anthony I. Tew (University of York)
EB1-10 “Quality discrimination on high-resolution audio with difference of quantization accuracy by sound-image localization,” Akio Suguro (Hachinohe Institute of Technology) and Masanobu Miura (Hachinohe Institute of Technology)
EB1-11 “Three-dimensional large-scale loudspeaker array system using high-speed 1-bit signal for immersive sound field reproduction,” Kakeru Kurokawa (Tokyo Denki University), Izumi Tsunokuni (Tokyo Denki University), Yusuke Ikeda (Tokyo Denki University), Naotoshi Osaka (Tokyo Denki University), and Yasuhiro Oikawa (Waseda University)
EB1-12 “Sound field reproduction in prism-type arrangement of loudspeaker array by using local sound field synthesis,” Izumi Tsunokuni (Tokyo Denki University), Kakeru Kurokawa (Tokyo Denki University), Yusuke Ikeda (Tokyo Denki University), and Naotoshi Osaka (Tokyo Denki University)
EB1-13 “Multipresence and Autofocus for Interpreted Narrowcasting,” Michael Cohen (University of Aizu) and Kojima Hiromasa (University of Aizu)

Workshop

  • W4 [10:00–12:00]: “Microphone Techniques for 3D sound recording”, Helmut Wittek (SCHOEPS Mikrofone GmbH), Will Howie (McGill University), Thorsten Weigelt (Berlin University of the Arts), Florian Camerer (ORF), Toru Kamekawa (Tokyo University of the Arts), Kimio Hamasaki (ARTSRIDGE LLC), Hyunkook Lee (University of Huddersfield)
  • W5 [13:15–14:30]: “Upmix and downmix technique for 3D sound recording and reproduction”, Toru Kamekawa (Tokyo University of the Arts), Yo Sasaki (NHK STRL), Rafael Kassier (HARMAN Lifestyle Division)
  • W6 [16:00–17:00]: “Object-Based Audio workflow for spatial broadcast productions”, Matthieu Parmentier (France TV), Dominique Brulhart (Merging Technologies)
  • W7 [17:10–18:10]: “Live spatial and Object-Based Audio production”, Matthieu Parmentier (France TV)

Tutorial

  • T3 [14:45–15:45]: “Ambisonic Recording and Mixing for Live Music Performance in 3D space”, Masato Ushijima (Sonologic-Design)

Student Event

Recording Critiques on 3D Audio for Students [10:00–12:00]

Day 3 (August 9th)

Paper Presentation

P9: Signal/Spatial Processing [9:00–10:15]
P9-1 “Structured sparsity for sound field reproduction with overlapping groups: Investigation of the latent group lasso,” Philippe-Aubert Gauthier (Université de Sherbrooke), Pierre Grandjean (Université de Sherbrooke), and Alain Berry (Université de Sherbrooke)
P9-2 “VISR — A versatile open software framework for audio signal processing,” Andreas Franck (ISVR University of Southampton) and Filippo Fazi (ISVR University of Southampton)
P9-3 “Comparison of microphone distances for real-time reverberation enhancement system using optimal source distribution technique,” Atsushi Marui (Tokyo University of the Arts), Motoki Yairi (Kajima Technical Research Institute), and Toru Kamekawa (Tokyo University of the Arts)
P10: Production-1 [10:30–12:10]
P10-1 “Subjective and objective evaluation of 9ch three-dimensional acoustic music recording techniques,” Will Howie (McGill University), Denis Martin (McGill University), David H. Benson (McGill University), Jack Kelly (McGill University), and Richard King (McGill University)
P10-2 “An Approach for Object-Based Spatial Audio Mastering,” Simon Hestermann (Fraunhofer), Mario Seideneck (Fraunhofer), and Christoph Sladeczek (Fraunhofer)
P10-3 “Efficacy of using spatial averaging in improving the listening area of immersive audio monitoring in programme production,” Aki Mäkivirta (Genelec) and Thomas Lund (Genelec)
P10-4 “Eclectic Space — Dynamic Processing, Temporal Envelope and Sampling as Spatial Stagers in Contemporary Pop Music,” Anders Reuter (University of Copenhagen)
P11: Production-2, Audio System-2 [14:45–16:25]
P11-1 “Sound-Space Choreography,” Gerard Erruz (Eurecat) and Timothy Schmele (Eurecat)
P11-2 “Should sound and image be coherent during live performances?” Etienne Hendrickx (University of Brest), Julian Palacino (Feichter Electronics), Vincent Koehl (University of Brest), Frédéric Changenet (Radio France), Etienne Corteel (L-Acoustics), and Mathieu Paquier (University of Brest)
P11-3 “A framework for intelligent metadata adaptation in object-based audio,” James Woodcock (University of Salford), Jon Francombe (BBC Research and Development), Andreas Franck (University of Southampton), Philip Coleman (University of Surrey), Richard Hughes (University of Salford), Hansung Kim (University of Surrey), Qingju Liu (University of Surrey), Dylan Menzies (University of Southampton), Marcos Simon Galvez (University of Southampton), Yan Tang (University of Salford), Tim Brookes (University of Surrey), William J. Davies (University of Salford), Bruno M. Fazenda (University of Salford), Russell Mason (University of Surrey), Trevor J. Cox (University of Salford), Filippo Maria Fazi (University of Southampton), Philip J. B. Jackson (University of Surrey), Chris Pike (BBC Research and Development), and Adrian Hilton (University of Surrey)
P11-4 “Dual frequency band amplitude panning for multichannel audio systems,” Richard Hughes (University of Salford), Andreas Franck (University of Southampton), Trevor Cox (University of Salford), Ben Shirley (University of Salford), and Filippo Maria Fazi (University of Southampton)
P12: Audio Systems [16:40–17:30]
P12-1 “Sign-Agnostic Matrix Design for Spatial Artificial Reverberation with Feedback Delay Networks,” Sebastian Schlecht (International Audio Laboratories, Erlangen) and Emanuel Habets (International Audio Laboratories, Erlangen)
P12-2 “Musical Emotions Evoked by 3D Audio,” Ephraim Hahn (McGill University)
Poster-3 (eB) [13:15–14:30]
EB2-1 “Individual binaural reproduction of music recordings using a virtual artificial head,” Mina Fallahi (Jade Hochschule Oldenburg), Matthias Blau (Jade Hochschule Oldenburg), Martin Hansen (Jade Hochschule Oldenburg), Simon Doclo (Carl von Ossietzky Universität Oldenburg), Steven van de Par (Carl von Ossietzky Universität Oldenburg), and Dirk Püschel (Akustik Technologie Göttingen)
EB2-2 “Creation of immersive-sound content by utilizing sound intensity measurement. Part 1, Generating a 4-pi reverberation,” Masataka Nakahara (ONFUTURE Ltd. / SONA Corporation), Akira Omoto (Kyushu University / ONFUTURE Ltd.), and Yasuhiko Nagatomo (Evixar Inc.)
EB2-3 “Creation of immersive-sound content by utilizing sound intensity measurement. Part 2, Obtaining a 3D panning table,” Takashi Mikami (SONA Corporation), Masataka Nakahara (ONFUTURE Ltd.), and Kazutaka Someya (beBlue Co., Ltd.)
EB2-4 “Creation of immersive-sound content by utilizing sound intensity measurement. Part 3, Object-based post-production work,” Ritsuko Tsuchikura (SONA Corporation), Akiho Matsuo (SONA Corporation), and Masumi Takino (beBlue Co., Ltd.)
EB2-5 “An immersive 3D audio-visual installation based on sound field rendering and reproduction with higher-order ambisonics,” Shoken Kaneko (Yamaha Corporation) and Hiraku Okumura (Yamaha Corporation)
EB2-6 “Reduction of Number of Inverse Filters for Boundary Surface Control,” Hiroshi Kashiwazaki (Kyushu University) and Akira Omoto (Kyushu University)
EB2-7 “A novel method for simulating movement in multichannel reverberation providing enhanced listener envelopment,” Joshua Jaworski (The University of Sydney) and William Martens (The University of Sydney)
EB2-8 “Investigation of practical compensation method for “so-called” bone conduction headphones with a focus on spatialization,” Akiko Fujise (Panasonic Corporation)
EB2-9 “Acoustic validation of a BEM-suitable 3D mesh of KEMAR,” Kat Young (University of York), Gavin Kearney (University of York), and Anthony I. Tew (University of York)
EB2-10 “Spatial impression of source widening effect for binaural audio production,” Hengwei Su (Tokyo University of the Arts), Atsushi Marui (Tokyo University of the Arts), and Toru Kamekawa (Tokyo University of the Arts)
EB2-11 “A method to measure the binaural response based on the reciprocity by using the dummy head with sound sources,” Wan-Ho Cho (Korea Research Institute of Standards and Science) and Ji-Ho Chang (Korea Research Institute of Standards and Science)
EB2-12 “Virtual Ensemble System with three-dimensional Sound Field Reproduction using ‘Sound Cask’,” Yuko Watanabe (Tokyo Denki University), Tomoya Fukuda (Tokyo Denki University), Shin Iwai (Tokyo Denki University), Rina Sasaki (Tokyo Denki University), and Shiro Ise (Tokyo Denki University)

Workshop

  • W8 [10:00–11:00]: “Strategies for Controlling and Composing for the Cube Spatial Audio Renderer”, Charles Nichols (Institute for Creativity, Arts, and Technology, Virginia Tech)
  • W9 [11:15–12:15]: “The Present of Spatial Audio Expression in VR Games”, Atsushi OHTA (BANDAI NAMCO Studios Inc.)
  • W10 [14:45–16:15]: “Creating Sound in Virtual 3-D Space”, Tomoya Kishi (CAPCOM Co., Ltd.)

Tutorial

  • T4 [9:00–9:45]: “Kraftwerk and Booka Shade — The Challenge to Create Electro Pop Music in Immersive / 3D audio”, Tom Ammermann (New Audio Technology)
  • T5 [13:15–14:30]: “Acoustic enhancement system: Lessons on spatial hearing from concert hall designs”, Hideo Miyazaki (Yamaha Corporation), Takayuki Watanabe (Yamaha Corporation), Suyoung Lee (SoundKoreaENG Corporation, Seoul, Korea)

Student Event

Recording Critiques on 3D Audio for Students [10:00–12:00]

Panel Discussion [16:30–17:30]

“What are the keys for connecting Science and Aesthetics in the 3D Audio Reproduction?”

Closing Ceremony [17:40–18:10]


Workshops

W1: The Real World of Immersive Surround Production Techniques in JAPAN

Chair(s)
Mick Sawaguchi (Fellow Member AES, CEO UNAMAS-Label)
Presenter(s) / Panel(s)
Mick Sawaguchi (CEO UNAMAS-Label) and Hideo Irimajiri (Senior Expert of WOWOW, PhD)
Abstract
In this workshop we will present real-world immersive surround production practices, drawing on music albums and programs produced in 9.1 channels and above. The 75-minute session combines slides and playback to make it directly useful to participants.
Mick Sawaguchi, a four-time award-winning producer and engineer, has been making immersive productions for his UNAMAS Label since 2014, recording classical music in 9.1 ch to 11.1 ch at Ohga Hall, Karuizawa. He will introduce his recording concept in terms of art, technology, and engineering, showing immersive surround miking and various height-miking approaches optimized for the musical style and delivery format.
Hideo Irimajiri will introduce various program productions for UHD-TV, from 22.2 ch to 9.1 ch, along with his W-Decca tree and various height-miking approaches for different events and venues.

W2: Techniques for recording and mixing pop, rock, and jazz music for 22.2 Multichannel Sound

Chair(s)
Will Howie (McGill University)
Presenter(s) / Panel(s)
Will Howie (McGill University)
Abstract
This workshop will present several newly developed techniques and concepts for music recording and mixing in 22.2 Multichannel Sound. These techniques can easily be scaled to 3D audio formats with fewer playback channels, or adapted to object-based workflows. Complex multi-microphone arrays designed to capture highly realistic direct-sound images are combined with spaced ambience arrays to reproduce a complete sound scene. Challenges and strategies for mixing music for 3D reproduction from traditional stereo multitracks will also be discussed, and numerous corresponding 3D audio examples will be played.

W3: Microphone arrangement comparison of orchestral instrument for recording producer, balance engineer education

Presenter(s) / Panel(s)
Thorsten Weigelt (Berlin University of the Arts) and Kazuya Nagae (Nagoya University of Arts)
Abstract
Every musical instrument has a specific sound-radiation pattern that depends strongly on tone and frequency, as Jürgen Meyer has investigated extensively in his book “Acoustics and the Performance of Music”. Because of this, the sound picked up by a microphone depends heavily on its exact placement relative to the instrument, a fact every sound engineer and producer has to be aware of. The recorded sound changes quite drastically with microphone placement, and we have to choose the “best”, or rather the most appropriate, placement for a specific recording situation.
We produced recordings of 15 orchestral instruments to make these differences audible for people working and studying in the field of recording arts; we hope the examples can supplement Meyer’s worthwhile and irreplaceable book. The samples were recorded in a realistic environment and situation, namely as stereo recordings in a concert hall. This conference’s topic is spatial audio, but where and what kind of sound radiates from an instrument is fundamental to music recording: it determines the balance between direct and indirect sound, and between musical and non-musical sounds. We think these examples will help. http://soundmedia.jp/nuaudk/

W4: Microphone Techniques for 3D sound recording

Chair(s)
Hyunkook Lee (University of Huddersfield) and Kimio Hamasaki (ARTSRIDGE LLC)
Presenter(s) / Panel(s)
Helmut Wittek (SCHOEPS Mikrofone GmbH), Will Howie (McGill University), Thorsten Weigelt (Berlin University of the Arts), Florian Camerer (ORF), Toru Kamekawa (Tokyo University of the Arts), Kimio Hamasaki (ARTSRIDGE LLC), Hyunkook Lee (University of Huddersfield)
Abstract
Over the last few years, various microphone techniques have been proposed for 3D sound recording in acoustic environments. Although they commonly add extra height microphones to an existing horizontal surround microphone array, they differ in the polar patterns and angular orientations of the height microphones and in the spacing between the lower and upper microphone layers. In a broad sense, there are two schools of technique for 3D music recording: main arrays using only omnidirectional microphones, and arrays that combine omnidirectional or directional main microphones with directional height microphones. To provide a better understanding of the potential merits and limitations of the different approaches, this workshop brings together leading recording engineers and researchers in the field of 3D audio for a panel discussion, with the aim of actively engaging participants. Each panel member will first share his or her own philosophy and technique for 3D sound capture, with demos where possible, before the panel discussion. The workshop will cover important quality aspects of 3D sound recording, such as localisation accuracy, spaciousness, timbral quality, musical intention, and the impression of ‘being there’, and discuss how different techniques can help achieve those qualities in recording.

W5: Upmix and downmix technique for 3D sound recording and reproduction

Chair(s)
Toru Kamekawa (Tokyo University of the Arts)
Presenter(s) / Panel(s)
TBD (Takehiro Sugimoto (NHK STRL))
Abstract
Several formats that include height channels have been proposed, such as NHK’s 22.2 multichannel system and the Auro-3D system, both of which are included in ITU-R BS.2159, standardized in 2012. For these playback systems, labeled 3D audio, upmix and downmix techniques are important for bringing the systems to the public. In this workshop we compare several upmix and downmix techniques and discuss how to maintain compatibility with simpler, conventional playback methods.
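As a concrete illustration of the downmix side of this topic, here is a minimal Python sketch of a static, gain-based fold-down of four height channels into a 5.1 bed. The channel names and the -3 dB coefficient are illustrative assumptions, not values taken from ITU-R BS.2159 or from the workshop.

    # Static, gain-based 5.1.4 -> 5.1 downmix. Channel names and the -3 dB
    # fold-down gain are illustrative assumptions only.
    import numpy as np

    HEIGHT_TO_BED = {  # height channel -> nearest horizontal channel
        "TpFL": "L", "TpFR": "R", "TpBL": "Ls", "TpBR": "Rs",
    }
    G = 10 ** (-3 / 20)  # -3 dB fold-down gain

    def downmix_514_to_51(ch):
        """ch: dict of channel name -> np.ndarray (equal lengths)."""
        bed = {k: ch[k].copy() for k in ("L", "R", "C", "LFE", "Ls", "Rs")}
        for top, dest in HEIGHT_TO_BED.items():
            bed[dest] += G * ch[top]
        return bed

An upmix works in the other direction, for example feeding decorrelated or ambience-extracted components to the height layer; compatibility then means that folding the upmixed result back down should yield something close to the original mix.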

W6: Object-Based Audio workflow for spatial broadcast productions - using Audio Definition Model for mastering

Chair(s)
Matthieu Parmentier (France TV)
Presenter(s) / Panel(s)
"Dominique Brulhart (Merging Technologies), Other broadcaster TBD, and IRCAM engineer TBD.
Abstract
This workshop will discuss the issues broadcasters face in delivering spatial audio productions over various distribution networks. Focusing on an object-based audio workflow that maintains a good quality/cost ratio, the discussions and demos will highlight new methodologies and tools recently engineered for creating and monitoring master files with the Audio Definition Model, a free and open format.

W7: Live spatial and Object-Based Audio production

Chair(s)
Matthieu Parmentier (France TV)
Presenter(s) / Panel(s)
TBD
Abstract
The EBU Audio Systems project team plans to produce an experimental live production that uses object-based audio to feed different spatial audio formats in parallel, such as Ultra High Definition TV and 360-degree VR. The upcoming European Athletics Championships in Berlin (August 2018) is a candidate to host this live trial. The talk will present the audio production workflow in detail, supported by a live or near-live demo.

W8: Strategies for Controlling and Composing for the Cube Spatial Audio Renderer

Chair(s)
Charles Nichols
Presenter(s) / Panel(s)
Charles Nichols (Institute for Creativity, Arts, and Technology, Virginia Tech)
Abstract
In the Moss Arts Center at Virginia Tech, the Institute for Creativity, Arts, and Technology (ICAT) has designed and built the Cube, a multimedia research lab and presentation venue incorporating a 134.6-speaker immersive spatial-audio system, 9 directional narrow-beam speakers, a 4-projector 360° surround video projection system with 3D capabilities, a 24-camera motion-capture system, and a tetherless virtual-reality system with head-mounted displays and backpack computers for up to 4 simultaneous users. As a faculty affiliate of ICAT, Assistant Professor of Composition and Creative Technologies Charles Nichols has helped research and design the audio system in the Cube, and has composed and performed several pieces using its audio, video, and motion-capture systems. ICAT Media Engineer Tanner Upthegrove has helped research and design all of the multimedia systems in the Cube, and has composed his own music for the spatial audio system. For the workshop, Charles Nichols will present a 30-minute lecture detailing the immersive multimedia systems of the Cube and the ways he and Tanner Upthegrove have controlled immersive spatial audio with commercial and custom software and hardware. After the lecture, Nichols will twice present a 30-minute performance of his compositions for electric violin and computer music, and fixed media by Upthegrove.

W9: The Present of Spatial Audio Expression in VR Games

Chair(s)
Atsushi OHTA (BANDAI NAMCO Studios Inc.)
Presenter(s) / Panel(s)
Atsushi OHTA (BANDAI NAMCO Studios Inc.)
Abstract
In this workshop we will describe the current situation, challenges, and future of spatial audio expression in video games. Through developing VR games and attractions, we have continually pursued sound fields as realistic as any we have experienced, producing “Summer Lesson” and “VR ZONE.” We will discuss the barriers in video game development, the know-how behind these VR titles, and our thoughts on the future of game audio expression.

W10: Creating Sound in Virtual 3-D Space —A Comparison of 3-D Audio Production—

Chair(s)
Tomoya Kishi (CAPCOM Co., Ltd.)
Presenter(s) / Panel(s)
Tetsukazu Nakanishi (BANDAI NAMCO Studios Inc.), Hajime Saito (beBlue Co.,Ltd Aoyama Studio), Masataka Nakahara (SONA Corporation / ONFUTURE Ltd.), Joel Douek (ECCO VR)
Abstract
In this workshop, we'll tackle the “Aesthetics and Science” of game audio. Game sound designers use intuition and rules of thumb to design in-game audio, but is there an acoustical basis for their work? Our academic experts will unravel the mystery. We'll also examine post-production audio approaches and designs that rely heavily on intuition and rules of thumb. How do they compare in terms of production limitations and interactive experience? Since games are a product of programming, hardware processing power determines acoustic implementation, which is then refined by the artist. In turn, users experience the results through interaction, making games a unique type of media. “Aesthetics and Science” are sure to be of even greater importance to game audio in the coming future; it's time for us to take a closer look.


Tutorials

T1: Psychoacoustics of 3D sound recording and reproduction

Presenter(s) / Panel(s)
Hyunkook Lee (Applied Psychoacoustics Lab, University of Huddersfield)
Abstract
3D surround audio formats aim to produce an immersive sound field in reproduction by utilising elevated loudspeakers. To make the best use of the added height channels in sound recording and reproduction, it is necessary to understand the psychoacoustic principles of vertical stereo perception. This tutorial/demo session aims to provide a comprehensive overview of the psychoacoustic principles that recording engineers and spatial audio researchers need to consider when recording or rendering 3D sound. The topics include real and phantom image localisation mechanisms in the vertical plane, vertical interchannel crosstalk, vertical interchannel decorrelation, the phantom image elevation effect, perceptual equalisation for height enhancement, and the practical application of these research findings in 3D microphone array design. Various recording techniques for 3D sound capture and perceptual signal processing techniques to enhance the 3D image will also be introduced, accompanied by demos of various 3D recordings, including the recent Dolby Atmos and Auro-3D Blu-ray release of the Siglo de Oro choir.

T2: New Surround and Immersive Recordings

Presenter(s) / Panel(s)
Jim Anderson, Ulrike Schwarz, Kimio Hamasaki
Abstract
Jim Anderson and Ulrike Schwarz (Anderson Audio New York), multi-Grammy-winning producers and engineers of surround productions, have spent the past year recording and mixing music in high-resolution and immersive formats in venues from New York to Norway to Havana. Their recordings have been made in various 3D recording formats and feature solo piano, big band, jazz trio and quartet, and orchestral performances. Mixing has taken place at Skywalker Sound, and mastering has been done by Bob Ludwig and Darcy Proper. Recordings will highlight performances by Jane Ira Bloom, Gonzalo Rubalcaba, the Jazz Ambassadors, and Norway’s Stavanger Symphony Orchestra. Moderator Kimio Hamasaki will host an in-depth conversation with the two producers as they recount their experiences of recording in immersive formats.

T3: Ambisonic Recording and Mixing for Live Music Performance in 3D space

Presenter(s) / Panel(s)
Masato Ushijima (Sonologic-Design, Owner/Sound Designer; Audiokinetic K.K., Product Expert)
Abstract
As the VR/AR industry grows rapidly, Ambisonics has again attracted attention as a primary audio format for delivering 360-degree audio with 360-degree streaming video on platforms such as YouTube and Facebook. However, Ambisonics is a demanding technology for creative work. This tutorial will present how to harness the beauty of sound field recording and mixing for 360-degree music video. Starting from a recap of the Ambisonic basics, it will offer ideas for a creative procedure applicable to any kind of 360-degree video format.
The session will focus on productive and creative Ambisonic usage for music, showing an Ambisonic recording and mixing pipeline using materials from my Ambisonic music video project. A little mathematical understanding may be required to follow the creative ideas behind Ambisonic technology, but the session will keep technical terms to a minimum. Practical examples will be shown, such as Ambisonic microphone and equipment selection, preparation for Ambisonic music recording, and mixing tools and techniques.
The tutorial will cover:
1) Understanding the key elements of Ambisonics
2) Overview of current Ambisonic usage in production
3) Discussion of the possibilities of Ambisonics for music
4) Pipeline of Ambisonic recording for music
5) Post-processing and mixing for music in Ambisonic format
6) Q&A
This tutorial is a consolidated and improved version of my previous Ambisonics sessions at CEDEC 2017 and the Inter BEE 2017 Sennheiser Session. Reference links: http://cedec.cesa.or.jp/2017/session/SND/s58de535f93c80/ https://www.youtube.com/playlist?list=PL2QD7ECBFJAGMO4IZKEEINs74fyNEM0BJ
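To make item 1 above concrete, here is a minimal Python sketch of first-order Ambisonic encoding in the traditional B-format (FuMa) convention, together with the kind of horizontal scene rotation used when aligning audio with 360-degree video. This is standard textbook Ambisonics, not material from the tutorial itself.

    # First-order Ambisonic (FuMa B-format) encoding and a z-axis rotation.
    # Standard textbook formulas; illustrative only.
    import numpy as np

    def encode_fuma(mono, az, el):
        """Encode a mono signal at azimuth/elevation (radians) to W, X, Y, Z."""
        w = mono / np.sqrt(2)                 # FuMa: W carries a -3 dB factor
        x = mono * np.cos(az) * np.cos(el)
        y = mono * np.sin(az) * np.cos(el)
        z = mono * np.sin(el)
        return np.stack([w, x, y, z])

    def rotate_z(bfmt, angle):
        """Rotate the whole scene by `angle` around the vertical axis."""
        w, x, y, z = bfmt
        c, s = np.cos(angle), np.sin(angle)
        return np.stack([w, c * x - s * y, s * x + c * y, z])

    # Example: a 440 Hz tone placed 90 degrees to the left, rotated to front.
    fs = 48000
    tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
    scene = rotate_z(encode_fuma(tone, np.pi / 2, 0.0), -np.pi / 2)

Because the scene is stored as spherical-harmonic signals rather than loudspeaker feeds, such rotations are cheap, which is one reason Ambisonics suits head-tracked 360-degree playback.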

T4: Kraftwerk and Booka Shade — The Challenge to Create Electro Pop Music in Immersive / 3D audio

Presenter(s) / Panel(s)
Tom Ammermann (New Audio Technology)
Abstract
Music does not take a cinematic approach, with spaceships flying around the listener. Nonetheless, music can become a fantastic spatial listening adventure in immersive/3D audio. How this sounds will be demonstrated with the new Grammy-awarded Kraftwerk and Booka Shade Blu-ray releases from this year. Production philosophies, strategies, and workflows for creating immersive/3D audio in current workflows and DAWs will be shown and explained.

T5: Acoustic enhancement system: Lessons on spatial hearing from concert hall designs

Chair
Sungyoung Kim (Rochester Institute of Technology)
Presenter(s) / Panel(s)
Hideo Miyazaki and Takayuki Watanabe (Yamaha Corporation, Hamamatsu, Japan), Suyoung Lee (SoundKoreaENG Corporation, Seoul, Korea)
Abstract
This tutorial provides a comprehensive understanding of spatial hearing in natural concert hall acoustics compared with a modern acoustic enhancement system. An acoustic enhancement system can alter the original or natural acoustic characteristics of a space using electro-acoustic devices, generating a new immersive acoustic environment. Concert hall acoustics have a rich history, and this tutorial draws important lessons from it with regard to spatial hearing. It will also discuss the connection of acoustics to the latest audio technology as an invaluable asset for today’s researchers in the field of spatial sound capture and manipulation. The panelists will introduce the history of acoustic enhancement systems and discuss the latest developments of these systems related to spatial impression and sound quality optimization. The second part of the tutorial will take place in a concert hall to demonstrate the manipulation of spatial attributes and their impact on musicians and listeners.