2016 Sound Field Control Conference Keynotes

Keynote speakers

Prof Philip Nelson

Sound field control: A brief history

Biography:
Philip Nelson holds the post of Professor of Acoustics in the Institute of Sound and Vibration Research at the University of Southampton. His personal research interests span acoustics, vibrations, signal processing, control systems, and fluid dynamics, and he is the author or co-author of 2 books, over 120 papers in refereed journals, 30 granted patents, and over 200 other technical publications. From 2005 to 2013 he served as Pro Vice-Chancellor of the University of Southampton, with particular responsibility for Research and Enterprise. He previously served as Director of the Institute of Sound and Vibration Research and as Director of the Rolls-Royce University Technology Centre in Gas Turbine Noise. Professor Nelson is a Fellow of the Royal Academy of Engineering, a Chartered Engineer, a Fellow of the Institution of Mechanical Engineers, a Fellow of the Institute of Acoustics, and a Fellow of the Acoustical Society of America. He is the recipient of both the Tyndall and Rayleigh Medals of the Institute of Acoustics and was President of the International Commission for Acoustics from 2004 to 2007. He also served as Chair of the Sub-Panel for General Engineering in REF 2014. Since 2014 he has been Chief Executive of the Engineering and Physical Sciences Research Council, and in October 2015 he took on the role of Chair of the RCUK Executive Group.

Abstract:
This presentation will deal with aspects of controlling sound fields through “active” (electro-acoustical) intervention, with the objective of either reducing unwanted sound or enhancing wanted sound. The lecture will begin with a review of some examples of work by researchers in the latter part of the nineteenth century and early part of the twentieth century that still have relevance today. Even an incomplete review of the literature on the subject finds many examples, especially during the twentieth century, of researchers who foresaw potential applications of active methods for the reduction of unwanted noise, but whose efforts were constrained by the electronic technology available at the time. Similarly, although significant practical advances were made in the development of effective methods for the reproduction of sound, it was not until the 1980s that the ready commercial availability of digital signal processors stimulated many new avenues of exploration. The presentation will attempt to illuminate the connections between the active control of sound and contemporary approaches to sound reproduction within the consistent framework provided by multichannel digital signal processing and the physical behaviour of linearly superposed sound fields.
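
The closing remark about multichannel signal processing and linearly superposed fields can be made concrete with a small numerical example. The sketch below is an illustration only, not material from the lecture: it computes least-squares optimal strengths for a set of secondary sources so that their superposed field cancels a tonal primary field at a set of error microphones. The geometry, frequency, source and sensor counts, and regularisation value are all assumptions chosen for the example.

    # Minimal sketch (illustrative assumptions throughout): least-squares control
    # of a tonal sound field by linear superposition of secondary sources.
    import numpy as np

    rng = np.random.default_rng(0)
    c, f = 343.0, 200.0                  # speed of sound [m/s], frequency [Hz]
    k = 2 * np.pi * f / c                # wavenumber

    def green(src, rec):
        """Free-field monopole Green's functions between source and receiver points."""
        r = np.linalg.norm(rec[:, None, :] - src[None, :, :], axis=-1)
        return np.exp(-1j * k * r) / (4 * np.pi * r)

    primary = np.array([[0.0, 0.0, 0.0]])                   # one primary (noise) source
    secondary = rng.uniform(-0.5, 0.5, (4, 3)) + [2, 0, 0]  # four control loudspeakers
    errors = rng.uniform(-0.5, 0.5, (8, 3)) + [4, 0, 0]     # eight error microphones

    p = green(primary, errors) @ np.array([1.0 + 0j])       # primary pressure at the sensors
    Z = green(secondary, errors)                            # secondary-path transfer matrix

    # Secondary source strengths minimising the sum of squared pressures,
    # q = -(Z^H Z + beta I)^-1 Z^H p, with a small regularisation term beta.
    beta = 1e-6
    q = -np.linalg.solve(Z.conj().T @ Z + beta * np.eye(Z.shape[1]), Z.conj().T @ p)

    e = p + Z @ q                                           # residual (superposed) field
    print("attenuation at the error sensors: "
          f"{10 * np.log10(np.sum(np.abs(p)**2) / np.sum(np.abs(e)**2)):.1f} dB")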

 

Dr Gary Elko

Differential Microphone Arrays

Biography:
Gary Elko has been a leading research engineer in the field of acoustic signal processing for telecommunications. He began his professional career in 1984 at Bell Labs as a member of the research staff in the renowned Acoustics Research Department. While at Bell Labs, he worked with a small group of researchers on signal processing algorithms for hands-free speakerphone communication. His work on adaptive superdirectional beamforming microphone arrays has led to the widespread use of this technology in hearing aids around the world. His publications on the general theory of differential microphone arrays are often viewed as a seminal contribution to the field of acoustic signal processing. Since 2002 he has been president of mh acoustics LLC, a small company specializing in developing and licensing intellectual property to customers in the fields of professional, consumer, and public safety audio. With his colleague Jens Meyer, he co-invented the Eigenmike® spherical microphone array, which decomposes the sound field into a compact set of orthogonal spherical harmonic signals and is now gaining commercial interest in the field of immersive audio.

Abstract:
Acoustic noise and reverberation can significantly degrade both the microphone reception and the loudspeaker transmission of speech and other desired acoustic signals. Directional loudspeakers and microphone arrays have proven effective in combating these problems. This talk will cover the design and implementation of a specific class of beamforming microphone arrays that are designed to respond to spatial differentials of the acoustic pressure field in order to attain a desired directional response. These beamformers are therefore referred to as differential microphone arrays. By definition, differential arrays are small compared to the acoustic wavelength over their frequency range of operation. Aside from the desirable small size of the differential array, another beneficial feature is that its directional response (beampattern) is generally independent of frequency. Differential arrays are superdirectional arrays, since their directivity is typically much higher than that of a uniform delay-and-sum array having the same geometry. It is well known that superdirectional arrays are subject to implementation robustness issues such as microphone amplitude and phase mismatch, sensitivity to microphone self-noise, and input circuit noise. The design of practical differential microphone arrays can therefore be cast as an optimization problem for a desired beampattern response, with constraints on the amount of SNR loss through the beamformer. Spherical differential arrays have been of interest for the spatial recording of sound fields for over 40 years. These arrays were initially first-order systems, but higher-order spherical arrays are becoming available and are an active area of research for spatial sound pickup and playback. Several microphone array geometries that use robustness design constraints, covering multiple-order differential arrays, will be shown.
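
To make the idea of a frequency-independent differential beampattern concrete, here is a minimal sketch of a first-order differential microphone built from two closely spaced omnidirectional elements; it is an illustration under assumed parameters, not the talk's implementation. Delaying and subtracting the rear element yields, once the first-order high-pass slope is normalised away, a beampattern close to the cardioid (1 + cos θ)/2 at all frequencies for which the spacing is small compared with the wavelength. The spacing, internal delay, and test frequencies below are assumptions.

    # Minimal sketch (illustrative, not the talk's implementation): first-order
    # differential microphone from two closely spaced omnidirectional elements.
    import numpy as np

    c = 343.0            # speed of sound [m/s]
    d = 0.01             # element spacing [m], small compared with the wavelength
    tau = d / c          # internal delay -> cardioid with a null at theta = 180 deg

    theta = np.linspace(0, np.pi, 181)          # one sample per degree
    for f in (250.0, 1000.0, 4000.0):
        w = 2 * np.pi * f
        # Subtract the delayed rear element from the front element; the external
        # propagation delay across the array is (d / c) * cos(theta).
        y = 1.0 - np.exp(-1j * w * (tau + (d / c) * np.cos(theta)))
        pattern = np.abs(y) / np.abs(y).max()   # normalise away the 6 dB/octave slope
        print(f"{f:6.0f} Hz  front {pattern[0]:.3f}  "
              f"side {pattern[90]:.3f}  rear {pattern[180]:.3f}")
    # For kd << 1 the normalised pattern approaches the frequency-independent
    # cardioid (1 + cos(theta))/2: ~1.0 at the front, ~0.5 at the side, ~0 at the rear.

The equalisation that removes the 6 dB-per-octave slope is also where the robustness issues mentioned in the abstract enter: it amplifies low-frequency microphone self-noise and any amplitude or phase mismatch between the elements, which is why practical designs constrain the SNR loss through the beamformer.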

 

Prof Steven van de Par

The Psychoacoustics of Reverberation

Biography:
Steven van de Par studied physics at the Eindhoven University of Technology, Eindhoven, The Netherlands, and received the Ph.D. degree from the same university in 1998 on a topic related to binaural hearing. As a postdoctoral researcher at the Eindhoven University of Technology, he studied auditory-visual interaction and was a guest researcher at the University of Connecticut Health Center. In early 2000 he joined Philips Research, Eindhoven, to do applied research in auditory and multisensory perception, low-bit-rate audio coding, and music information retrieval. Since April 2010 he has held a professorship in acoustics at the University of Oldenburg, Germany, with a research focus on the fundamentals of auditory perception and its application to virtual acoustics, vehicle acoustics, and digital signal processing. He has published various papers on binaural auditory perception, auditory-visual synchrony perception, audio coding, and computational auditory scene analysis.

Abstract:
On a physical level, reverberation has a significant influence on the acoustic signals that we receive at our eardrums, yet for our auditory perception its influence often seems relatively small. In this presentation some important psychoacoustic principles will be presented by which the auditory system can reduce the detrimental effect of reverberation. These insights will be discussed in the context of modelling speech intelligibility in reverberant environments. An example will be given of how such models of speech intelligibility can be used in sound control to reduce the detrimental effect of reverberation on speech understanding. Finally, some perceptual consequences of reverberation in the rendering of non-speech audio will also be discussed.
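
One widely used way to quantify how reverberation degrades speech intelligibility is through the modulation transfer function of an ideal exponentially decaying reverberant field, as in the Speech Transmission Index of Houtgast and Steeneken. The sketch below is a simplified, single-band illustration, not the model discussed in the presentation; the reverberation times and modulation frequencies are assumed values. It shows how longer reverberation times progressively wash out the slow envelope modulations that carry speech information.

    # Minimal sketch (not the presenter's model): a simplified STI-style estimate
    # of intelligibility loss due to reverberation, using the classic modulation
    # transfer function of an exponentially decaying reverberant field.
    import numpy as np

    mod_freqs = np.array([0.63, 0.8, 1.0, 1.25, 1.6, 2.0, 2.5,
                          3.15, 4.0, 5.0, 6.3, 8.0, 10.0, 12.5])  # modulation frequencies [Hz]

    def sti_reverb_only(t60):
        """Crude single-band STI-style estimate for reverberation time t60 (seconds)."""
        # MTF of an ideal exponential decay: m(F) = 1 / sqrt(1 + (2*pi*F*T60 / 13.8)^2)
        m = 1.0 / np.sqrt(1.0 + (2 * np.pi * mod_freqs * t60 / 13.8) ** 2)
        snr_apparent = np.clip(10 * np.log10(m / (1 - m)), -15, 15)  # apparent SNR [dB]
        return np.mean((snr_apparent + 15) / 30)                     # map to 0..1 index

    for t60 in (0.3, 0.8, 1.5, 3.0):
        print(f"T60 = {t60:.1f} s  ->  index ~ {sti_reverb_only(t60):.2f}")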

 

 