AES New York 2007
P4 - Signal Processing, Part 2
Paper Session P4
Friday, October 5, 1:30 pm — 4:30 pm
Chair: Alan Seefeldt, Dolby Laboratories - San Francisco, CA, USA
P4-1 Loudness Domain Signal Processing—Alan Seefeldt, Dolby Laboratories - San Francisco, CA, USA
Loudness Domain Signal Processing (LDSP) is a new framework within which many useful audio processing tasks may be achieved with high-quality results. The LDSP framework presented here involves first transforming audio into a perceptual representation utilizing a psychoacoustic model of loudness perception. This model maps the nonlinear variation of loudness perception with signal frequency and level into a domain where loudness perception across frequency and time is represented on a uniform scale. As such, this domain is ideal for performing various loudness modification tasks such as volume control, automatic leveling, etc. These modifications may be performed in a modular and sequential manner, and the resulting modified perceptual representation is then inverted through the psychoacoustic loudness model to produce the final processed audio.
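The paper's psychoacoustic model is not reproduced in the abstract, but the core idea, applying a gain on a uniform loudness scale and then inverting back to a signal level, can be illustrated with a much simpler stand-in: Stevens' rule of thumb that each 10 phon increase roughly doubles loudness in sones. All function names below are illustrative and not from the paper.

```python
import math

def phon_to_sone(phon):
    """Stevens' rule of thumb: +10 phon doubles perceived loudness.
    A reasonable approximation only above roughly 40 phon."""
    return 2.0 ** ((phon - 40.0) / 10.0)

def sone_to_phon(sone):
    """Inverse mapping back from the uniform loudness scale."""
    return 40.0 + 10.0 * math.log2(sone)

def scale_loudness(phon_level, factor):
    """A toy loudness-domain gain: scale loudness in sones,
    then invert back to a level in phons."""
    return sone_to_phon(phon_to_sone(phon_level) * factor)

# Halving the perceived loudness of an 80-phon tone costs ~10 phon:
print(round(scale_loudness(80.0, 0.5), 1))  # 70.0
```

The point of the uniform domain is visible here: a perceptually meaningful operation ("half as loud") becomes a simple multiplication, with the nonlinearity confined to the forward and inverse mappings.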
Convention Paper 7180 (Purchase now)
P4-2 Design of a Flexible Crossfade/Level Controller Algorithm for Portable Media Platforms—Danny Jochelson, Texas Instruments, Inc. - Dallas, TX, USA; Stephen Fedigan, General Dynamics SATCOM - Richardson, TX, USA; Jason Kridner, Jeff Hayes, Texas Instruments, Inc. - Stafford, TX, USA
The addition of a growing number of multimedia capabilities on mobile devices necessitates rendering multiple streams simultaneously, fueling the need for intelligent mixing of these streams to achieve proper balance and address the tradeoff between dynamic range and saturation. Additionally, crossfading between successive streams can greatly enhance the user experience on portable media devices. This paper describes the architecture, features, and design challenges for a real-time, intelligent mixer with crossfade capabilities for portable audio platforms. This algorithm shows promise in addressing many audio system challenges on portable devices through a highly flexible and configurable design while maintaining low processing requirements.
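The paper's mixer design is not public, but the standard building block it implies, an equal-power crossfade between an outgoing and an incoming stream, can be sketched as follows. The sine/cosine gain law and all names here are conventional choices for illustration, not the authors' algorithm.

```python
import math

def equal_power_gains(t):
    """Equal-power crossfade gains for a mix position t in [0, 1].
    The gains trace a quarter cosine/sine so that
    g_out**2 + g_in**2 == 1, keeping perceived level roughly constant."""
    g_out = math.cos(t * math.pi / 2.0)  # outgoing stream
    g_in = math.sin(t * math.pi / 2.0)   # incoming stream
    return g_out, g_in

def crossfade(a, b, n_overlap):
    """Mix the tail of stream a into the head of stream b over
    n_overlap samples (lists of floats, each at least n_overlap long)."""
    mixed = []
    for i in range(n_overlap):
        g_out, g_in = equal_power_gains(i / (n_overlap - 1))
        mixed.append(g_out * a[i] + g_in * b[i])
    return mixed
```

An equal-power law (rather than linear gains) is the usual answer to the dynamic-range-versus-saturation tradeoff the abstract mentions: uncorrelated streams sum to approximately constant power through the overlap, avoiding both a mid-fade dip and clipping.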
Convention Paper 7181 (Purchase now)
P4-3 Audio Delivery Specification—Thomas Lund, TC Electronic A/S - Risskov, Denmark
From the quasi-peak meter in broadcast to sample-by-sample assessment in music production, normalization of digital audio has traditionally been based on a peak level measure. The paper demonstrates how low dynamic range material under such conditions generally comes out the loudest and how the recent ITU-R BS.1770 standard offers a coherent alternative to peak level fixation. Taking the ITU-R recommendations into account, novel ways of visualizing short-term loudness and loudness history are presented; and applications for compatible statistical descriptors portraying an entire music track or broadcast program are discussed.
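The ITU-R BS.1770 measure the abstract refers to combines the mean square of K-weighted channel signals into a single loudness value in LKFS. A minimal sketch of that summation is shown below; for brevity it assumes the inputs are already K-weighted and omits the K pre-filter itself and the measurement gating added in later revisions of the recommendation.

```python
import math

# Per-channel weights G_i from ITU-R BS.1770
# (front channels 1.0, surrounds ~1.41).
CHANNEL_WEIGHTS = {"L": 1.0, "R": 1.0, "C": 1.0, "Ls": 1.41, "Rs": 1.41}

def bs1770_loudness(channels):
    """Loudness in LKFS per ITU-R BS.1770:
        L = -0.691 + 10 * log10(sum_i G_i * z_i)
    where z_i is the mean square of the K-weighted signal of channel i.
    `channels` maps channel names to lists of (already K-weighted) samples.
    """
    total = 0.0
    for name, samples in channels.items():
        z = sum(s * s for s in samples) / len(samples)
        total += CHANNEL_WEIGHTS[name] * z
    return -0.691 + 10.0 * math.log10(total)
```

Because the measure integrates energy rather than tracking peaks, heavily limited low-dynamic-range material no longer gains the loudness advantage the abstract describes: two programs with equal mean-square energy read the same regardless of their crest factor.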
Convention Paper 7182 (Purchase now)
P4-4 Multi-Core Signal Processing Architecture for Audio Applications—Brent Karley, Sergio Liberman, Simon Gallimore, Freescale Semiconductor, Inc. - Austin, TX, USA
As already seen in the embedded computing industry and other consumer markets, the trend in audio signal processing architectures is toward multi-core designs. This trend is expected to continue given the need to support higher performance applications that are becoming more prevalent in both the consumer and professional audio industries. This paper describes multi-core audio architectures being promoted to the audio industry and details the various architectural hardware, software, and system-level trade-offs. The proper application of multi-core architectures is addressed for both consumer and professional audio applications, and a comparison of single-core, multi-core, and multi-chip designs is provided based on the authors’ experience in the design, development, and application of signal processors.
Convention Paper 7183 (Purchase now)
P4-5 Rapid Prototyping and Implementing Audio Algorithms on DSPs Using Model-Based Design and Automatic Code Generation—Arvind Ananthan, The MathWorks - Natick, MA, USA
This paper explores the increasingly popular model-based design concept to design audio algorithms within a graphical design environment, Simulink, and automatically generate processor-specific code to implement them on a target DSP in a short time without any manual coding. The fixed-point processors targeted in this paper are the Analog Devices Blackfin processor and the Texas Instruments C6416 DSP. The concept of model-based design introduced here is explained primarily using an acoustic noise cancellation system (based on an LMS algorithm) as an example. However, the same approach can be applied to other audio and signal processing algorithms; other examples shown during the lecture include a 3-band parametric equalizer, a reverberation model, flanging, voice pitch shifting, and other audio effects. The design process, from a floating-point model through its conversion to a fixed-point model, is demonstrated in this paper. The model is then implemented on a C6416 DSK board and a Blackfin 537 EZ-Kit board using the automatically generated code. Finally, the paper also explains how to profile the generated code and optimize it using C intrinsics (C-callable assembly libraries).
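A minimal floating-point version of the LMS algorithm underlying the paper's noise-cancellation example might look like the sketch below. This is a plain-Python illustration of the textbook update rule only; the actual Simulink model and the generated fixed-point DSP code are far more involved.

```python
def lms_filter(x, d, n_taps=8, mu=0.01):
    """Basic LMS adaptive filter: adapt the weights w so that the
    filtered reference x approximates the desired signal d.
    Returns the error signal e = d - y (in a noise-cancellation
    setup, e is the 'cleaned' output) and the final weights.
    x and d are equal-length lists of floats."""
    w = [0.0] * n_taps
    e = []
    for n in range(len(x)):
        # The most recent n_taps reference samples, newest first.
        window = [x[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wk * xk for wk, xk in zip(w, window))
        err = d[n] - y
        # LMS weight update: w_k <- w_k + mu * e[n] * x[n - k]
        w = [wk + mu * err * xk for wk, xk in zip(w, window)]
        e.append(err)
    return e, w
```

The multiply-accumulate inner loop is exactly the kind of kernel that maps well onto the Blackfin and C6416 targets mentioned in the abstract, which is why it serves as a natural code-generation benchmark.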
Convention Paper 7184 (Purchase now)
P4-6 Filter Reconstruction and Program Material Characteristics Mitigating Word Length Loss in Digital Signal Processing-Based Compensation Curves Used for Playback of Analog Recordings—Robert S. Robinson, Channel D Corporation - Trenton, NJ, USA
Renewed consumer interest in pre-digital recordings, such as vinyl records, has spurred efforts to implement playback emphasis compensation in the digital domain. This facilitates realizing tighter design objectives with less effort than is required with practical analog circuitry. A common assumption regarding a drawback to this approach, namely bass resolution loss (word length truncation) of up to approximately seven bits during digital de-emphasis of recorded program material, ignores the reconstructive properties of compensation filtering and the characteristics of typical program material. An analysis of the problem is presented, as well as examples showing a typical resolution loss of zero to one bit. The worst-case resolution loss, which is unlikely to be encountered with music, is approximately three bits.
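The "approximately seven bits" figure the abstract disputes is consistent with simple back-of-envelope arithmetic: one bit of resolution per 6.02 dB of level shift, applied to the roughly 40 dB span of the RIAA curve between 20 Hz and 20 kHz. The sketch below shows that arithmetic; the 40 dB span and this reading of the figure are assumptions of the sketch, not statements from the paper.

```python
import math

def bits_lost(attenuation_db):
    """Naive worst-case word-length (resolution) loss for a signal
    component sitting attenuation_db below full scale: each
    20*log10(2) ~= 6.02 dB corresponds to one bit."""
    return attenuation_db / (20.0 * math.log10(2.0))

# The RIAA playback curve spans roughly 40 dB from 20 Hz to 20 kHz,
# so a component can sit ~40 dB below the normalization peak:
print(round(bits_lost(40.0), 1))  # 6.6, i.e. "up to approximately
                                  # seven bits" in the naive view
```

The paper's argument, as summarized in the abstract, is that this naive bound ignores both the reconstructive effect of the compensation filter and the spectra of real program material, which is why the observed loss is far smaller.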
Convention Paper 7185 (Purchase now)
Last Updated: 20070831, mei