AES Conventions and Conferences

117th Convention Program

Sunday, October 31, 9:00 am – 10:30 am
Session O AUDIO RECORDING AND REPRODUCTION

Chair: Sunil Bharitkar, Audyssey Labs., Inc., Los Angeles, CA, USA

9:00 am
O-1
Specifying the Jitter Performance of Audio Components
Chris Travis, Sonopsis Ltd., Wotton-under-Edge, Gloucestershire, UK; Paul Lesso, Wolfson Microelectronics, Edinburgh, UK
The question of sample-clock quality is a perennial one for digital audio equipment designers. Yet most chip makers provide very little information about the jitter performance of their products. Consequently, equipment designers sometimes get burnt by jitter issues. The increasing use of packet-based communications and class-D amplification will throw these matters into sharp relief. This paper reviews various ways of characterizing and quantifying jitter, and refines several of them for audio purposes. It also attempts to present a common, unambiguous terminology. Topics covered include wideband jitter, baseband jitter, jitter spectra, period jitter, long-term jitter, and jitter signatures. Comments are made on jitter transfer through phase-locked loops and on the jitter susceptibility of audio converters.
Convention Paper 6293
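Two of the metrics the abstract names, period jitter and long-term jitter, can be sketched numerically from a set of measured clock-edge timestamps. The following is a rough illustration of those general definitions, not the characterizations refined in the paper; the function name and the least-squares reference clock are this sketch's own choices.

```python
import numpy as np

def jitter_metrics(edge_times, nominal_period):
    """Illustrative jitter metrics from measured clock-edge timestamps.

    edge_times: 1-D array of rising-edge times (seconds)
    nominal_period: ideal clock period (seconds)
    """
    # Period jitter: RMS deviation of each measured period from nominal.
    periods = np.diff(edge_times)
    period_jitter_rms = np.sqrt(np.mean((periods - nominal_period) ** 2))

    # Long-term jitter: RMS deviation of each edge from an ideal,
    # jitter-free clock fitted (least squares) to the measured edges.
    n = np.arange(len(edge_times))
    coef = np.polyfit(n, edge_times, 1)
    ideal = np.polyval(coef, n)
    long_term_jitter_rms = np.sqrt(np.mean((edge_times - ideal) ** 2))

    return period_jitter_rms, long_term_jitter_rms
```

For uncorrelated Gaussian edge jitter of standard deviation σ, this yields a long-term jitter of about σ and a period jitter of about √2·σ, since each period difference combines the jitter of two independent edges.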

9:30 am
O-2
High Performance Discrete Building Blocks for Balanced Audio Signal Processing
Bruno Putzeys, Grimm Audio, Eindhoven, The Netherlands
To audio systems designers, the “fully differential op amp” is a relatively new entry. Two discrete-circuit variations on the theme are presented, one of which provides effectively floating outputs.
Convention Paper 6294

10:00 am
O-3
Partial Unmixing for Personalized Audio
Mark Dolson, Creative Advanced Technology Center, Scotts Valley, CA, USA
Immersive audio for interactive gaming is necessarily processed and mixed in real time as it is being rendered on the game audio playback platform. It is generally assumed that music and movie soundtracks require no comparable processing during playback because listeners typically provide no real-time input that might affect the final rendering. In reality, prepackaged audio is being delivered to music and movie playback platforms in increasingly diverse forms. The result is that mismatches between the spatial audio format, bit depth, and frequency range of the content and those of the playback system pose an emerging problem for which sophisticated playback processing may be an appropriate response. This paper presents a formal statement of the mismatch problem and proposes a unified solution using frequency-domain processing to perform “partial unmixing” of the prepackaged content. Finally, we show how this can enable a new music/movie listening experience rooted in the concept of “personalized audio.”
Convention Paper 6295
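As a rough illustration of the frequency-domain processing the abstract alludes to (not the paper's algorithm), one common form of partial unmixing estimates the phantom-center component of a stereo mix with a per-bin similarity mask: bins where the left and right spectra agree are attributed to the center. The function name, mask, and window choices below are this sketch's own assumptions.

```python
import numpy as np

def extract_center(left, right, frame=1024, hop=512):
    """Sketch of frequency-domain 'partial unmixing': estimate the
    common (phantom-center) component of a stereo mix via STFT
    overlap-add with a per-bin similarity mask."""
    win = np.hanning(frame)
    out = np.zeros(len(left))
    for start in range(0, len(left) - frame, hop):
        L = np.fft.rfft(win * left[start:start + frame])
        R = np.fft.rfft(win * right[start:start + frame])
        # Mask is 1 where L and R carry identical energy, 0 where a
        # bin appears in only one channel.
        denom = np.abs(L) ** 2 + np.abs(R) ** 2 + 1e-12
        mask = 2 * np.abs(L * np.conj(R)) / denom
        C = 0.5 * mask * (L + R)
        # Windowed overlap-add resynthesis of the masked spectrum.
        out[start:start + frame] += np.fft.irfft(C) * win
    return out
```

A signal panned hard to one channel gets a mask of zero and vanishes from the estimate, while a signal present identically in both channels passes through, which is the basic behavior any such unmixing mask must exhibit.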



(C) 2004, Audio Engineering Society, Inc.