
Audio Engineering Society

Chicago Section

Meeting Recap - November 26, 2013


Written By: Ken Platz

Topic: Microphone Array Usage in Mobile Device Multimedia Applications

Presented by: Plamen Ivanov


The November 26, 2013, meeting of the Chicago AES Section was held at Shure Incorporated, located in Niles, Illinois. Just over thirty members and non-members attended Plamen Ivanov’s presentation in the S.N. Shure Theater.

Plamen started his presentation with a brief history of multiple microphones in cell phones: the first phones with two independent microphones appeared between 2004 and 2006; phones with two-microphone null steering (NS) algorithms followed between 2007 and 2009; and 2010 brought the first three-microphone cellphone, from Motorola.

Two different approaches to null steering were then presented: the delay & sum beamformer and the differential array. The delay & sum beamformer introduces a delay (via a digital filter) in each sensor path to achieve coherent summation; steering is accomplished by adjusting those delays, selecting filter coefficients from a given filter bank. The differential array (DA) provides higher directivity than the delay & sum with the same number of sensors and offers constant directivity with frequency, but its output is not flat, so it requires equalization. The DA is known as an end-fire array, whereas the delay & sum is known as a broadside array. Adding a time-delay element to a differential array allows for effective null steering, and various directivity patterns (cardioid, super-cardioid, hyper-cardioid, bi-directional, etc.) can be obtained from two omnidirectional microphones. Multi-microphone ‘hybrid’ arrays process each microphone pair’s signals in its optimal sub-bands. This enables a phone to offer various audio attributes through modes such as ‘subject’, ‘balanced’, ‘concert’, or ‘mic-zoom’.
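The two array types described above can be sketched numerically. The short Python/NumPy snippet below is a minimal illustration, not material from the talk: the 2 cm mic spacing and 1 kHz test frequency are assumed values chosen to be plausible for a phone. It compares the response of a two-microphone delay & sum pair (broadside) with a first-order differential array whose internal delay T = d/c produces a cardioid pointed along the array axis.

```python
import numpy as np

c = 343.0        # speed of sound, m/s
d = 0.02         # mic spacing, m (assumed; illustrative for a phone)
f = 1000.0       # test frequency, Hz (assumed)
w = 2 * np.pi * f

theta = np.linspace(0, np.pi, 181)       # arrival angle re: array axis, 1 deg steps
tau = d * np.cos(theta) / c              # inter-mic propagation delay vs. angle

# Delay & sum (broadside): sum the two sensors with zero steering delay;
# the main lobe sits broadside, at theta = 90 degrees.
ds = np.abs(1 + np.exp(-1j * w * tau)) / 2

# First-order differential array (end-fire): subtract a delayed copy of
# the rear mic. An internal delay T = d/c gives a cardioid: maximum at
# theta = 0, null at theta = 180 degrees.
T = d / c
da = np.abs(1 - np.exp(-1j * w * (tau + T)))
da = da / da.max()   # raw DA output is not flat; normalize (EQ stands in here)

print(round(da[0], 3), round(da[-1], 3))   # → 1.0 0.0  (cardioid max and null)
```

Note how weakly directional the delay & sum pair is at this spacing and frequency, which matches the point from the talk that the differential array achieves higher directivity from the same number of sensors.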

Implementing multiple microphones in a new phone requires attention to multiple disciplines and attributes. The designer needs to consider the user interface, susceptibility to wind noise and handling noise, and occlusion (when the user places a hand or finger over the mic openings). Plamen also described the various implementation challenges an audio engineer faces: system interactions span mechanical & electrical requirements, algorithm & software requirements, and system integration.

Questions from the audience covered user testing (robotic and human tests are conducted early and often by a development team), restricted bandwidth for user settings (the various algorithms give the user multiple options to choose from, and the options are not implemented simply by band-limiting the response), whether a user can be expected to know how to use the phone in its various modes (menus and icons are constantly being designed, reviewed, and updated to assist the user), and whether proximity effect is considered an issue (it is not a design issue, since most recording use cases do not require placing the phone close to the talker).

The Chicago AES Section would like to extend a special thanks to Plamen Ivanov for presenting to our section and for including some rather intense math to show us that providing audio options is not as simple as today’s cellphone user would anticipate.



ABOUT THIS MONTH’S SPEAKER:

Plamen Ivanov was born in Vratsa, Bulgaria. He studied Acoustics and Audio Signal Processing at the Wroclaw Polytechnic in Poland and graduated with a Master’s degree in Electrical Engineering in 1996, with thesis work on text-to-speech synthesis. He was awarded a European Union “Tempus Scholar Program” grant for a year-long visit to Sheffield Hallam University (England), where he worked in the areas of room acoustics and automated signal identification.
In 1996 he joined the graduate program in the Department of Electrical and Computer Engineering at Iowa State University and conducted research in various aspects of non-destructive material evaluation, including finite-element modeling, defect detection and characterization, and test and measurement system development. He specialized in communications and signal processing and defended a doctorate in 2002.
In 2003 he consulted as a Research Engineer in the area of HSDPA multi-antenna baseband signal processing. Since 2004 he has been with Motorola, contributing to technology development and its transfer into products, across various voice and multimedia signal processing components. His current interests and work include surround-sound multimedia technology, array processing, and signal conditioning for robust speech recognition.