AES London 2011
Poster Session P13

P13 - Production and Broadcast


Saturday, May 14, 16:30 — 18:00 (Room: Foyer)

P13-1 A Comparison of Kanun Recording Techniques as They Relate to Turkish Makam Music Perception
Can Karadogan, Istanbul Technical University - Istanbul, Turkey
This paper presents a quality comparison of microphone techniques applied to the kanun, a prominent traditional instrument of Turkish Makam music. Disregarding the effects of pre-amplifier color, A/D conversion, compression, equalization, mixing, and mastering, the focus is placed solely on the studio recording step of music production. Microphone techniques were applied with varying placements and microphone types, and original Turkish Makam music etudes were recorded in the process. Using short excerpts of these etudes, a survey was prepared comparing microphone techniques and placements as well as microphone types. Subjects were drawn from kanun players, sound engineers, and non-musicians, who offered different perspectives, preferences, and descriptions of the sound samples produced by the various microphone techniques.
Convention Paper 8393 (Purchase now)

P13-2 Objective Measurement of Produced Music Quality Using Inter-Band Dynamic Relationship Analysis
Steven Fenton, University of Huddersfield - Huddersfield, UK; Bruno Fazenda, University of Salford - Salford, UK; Jonathan Wakefield, University of Huddersfield - Huddersfield, UK
This paper describes and evaluates an objective measurement that grades the quality of a complex musical signal. The authors have previously identified a potential correlation between inter-band dynamics and the subjective quality of produced music excerpts. This paper reviews the previously presented Inter-Band Relationship (IBR) descriptor and extends this work by testing with real-world music excerpts and a greater number of listening subjects. A high degree of correlation is observed between the Mean Subject Scores (MSS) and the objective IBR descriptor, suggesting that it could be used as an additional model output variable (MOV) to describe produced music quality. The method lends itself to real-time implementation and can therefore be exploited within mixing, mastering, and monitoring tools.
Convention Paper 8394 (Purchase now)
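
The exact IBR formulation is given in Convention Paper 8394; as a rough illustration of the idea of relating per-band dynamics to subjective scores, the sketch below computes a short-term crest factor in a handful of assumed frequency bands, summarizes how similarly the bands' dynamics evolve, and correlates that summary with mean subjective scores. The band edges, window length, and crest-factor measure are assumptions for illustration, not the paper's definitions.

```python
# Hypothetical inter-band dynamics descriptor and its correlation with mean
# subjective scores (MSS). Band split, window length, and the use of a crest
# factor are illustrative assumptions, not the IBR definition from the paper.
import numpy as np
from scipy.signal import butter, sosfilt
from scipy.stats import pearsonr

def band_crest_factors(x, fs, edges=(20, 250, 2000, 8000, 16000), win_s=0.4):
    """Short-term crest factor (peak/RMS, in dB) per band, frame by frame."""
    win = int(win_s * fs)
    crests = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        y = sosfilt(sos, x)
        frames = y[: len(y) // win * win].reshape(-1, win)
        rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
        peak = np.max(np.abs(frames), axis=1) + 1e-12
        crests.append(20 * np.log10(peak / rms))
    return np.array(crests)              # shape: (bands, frames)

def ibr_like_descriptor(x, fs):
    """Single value summarizing how similarly the bands' dynamics evolve."""
    c = band_crest_factors(x, fs)
    corr = np.corrcoef(c)                # inter-band correlation matrix
    return np.mean(corr[np.triu_indices_from(corr, k=1)])

# Correlating the descriptor with MSS over a set of excerpts (hypothetical data):
# r, p = pearsonr([ibr_like_descriptor(x, fs) for x in excerpts], mss_scores)
```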

P13-3 Evaluation of a New Algorithm for Automatic Hum Detection in Audio Recordings
Matthias Brandt, Jade University of Applied Sciences - Oldenburg, Germany; Thorsten Schmidt, Cube-Tec International - Bremen, Germany; Joerg Bitzer, Jade University of Applied Sciences - Oldenburg, Germany
In this paper an evaluation of a recently published hum detection algorithm for audio signals is presented. To determine the performance of the method, large amounts of artificially generated and real-world audio data, containing a variety of music and speech recordings, are processed by the algorithm. By comparing the detection results with manually determined ground truth data, several error measures are computed: hit and false alarm rates, deviation of the estimated hum frequency, offset of the detected start and stop times, and the accuracy of the SNR estimation.
Convention Paper 8395 (Purchase now)
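
To make the evaluation metrics named above concrete, the sketch below scores a list of detected hum events against manually annotated ground truth, producing hit rate, false-alarm rate, and mean timing and frequency deviations. The event format and the matching tolerance are assumptions for illustration; the paper's exact definitions may differ.

```python
# Illustrative scoring of hum-detection output against manual ground truth.
# Event fields (start, stop, freq) and the 1-second matching tolerance are
# assumptions, not values taken from the paper.
from dataclasses import dataclass

@dataclass
class HumEvent:
    start: float   # seconds
    stop: float    # seconds
    freq: float    # Hz

def score(detected, truth, tol=1.0):
    """Return hit rate, false-alarm rate, mean timing error, mean freq error."""
    hits, t_err, f_err, used = 0, [], [], set()
    for d in detected:
        match = next((i for i, t in enumerate(truth)
                      if i not in used and abs(d.start - t.start) <= tol), None)
        if match is None:
            continue                       # no ground-truth event nearby
        used.add(match)
        t = truth[match]
        hits += 1
        t_err.append(abs(d.start - t.start) + abs(d.stop - t.stop))
        f_err.append(abs(d.freq - t.freq))
    hit_rate = hits / len(truth) if truth else 0.0
    false_alarm_rate = (len(detected) - hits) / max(len(detected), 1)
    mean = lambda v: sum(v) / len(v) if v else 0.0
    return hit_rate, false_alarm_rate, mean(t_err), mean(f_err)
```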

P13-4 Interactive Mixing Using Wii Controller
Rod Selfridge, Joshua Reiss, Queen Mary University of London - London, UK
This paper describes the design, construction, and analysis of an interactive gesture-controlled audio mixing system operated by means of a wireless video game controller. The concept is based on the idea that the mixing engineer can step away from the mixing desk and become part of the performance of the piece of audio. The system allows full, live control of gains, stereo panning, equalization, dynamic range compression, and a variety of other effects for multichannel audio. The system and its implementation are described in detail. Subjective evaluation and listening tests were performed to assess the usability and performance of the system, and the test procedure and results are reported.
Convention Paper 8396 (Purchase now)

P13-5 The Quintessence of a Waveform: Focus and Context for Audio Track Displays
Jörn Loviscach, Fachhochschule Bielefeld (University of Applied Sciences) - Bielefeld, Germany
Oscilloscope-style waveform plots offer great insight into the properties of an audio signal. However, their use is impeded by the huge spread of timescales, extending from fractions of a millisecond to several hours, so waveform plots often require zooming in and out. This paper introduces a graphical representation based on a synthesized quintessential waveform that reflects the spectrum of the signal while retaining the look of the traditional waveform plot, but does so at a much larger timescale. The quintessential waveform can reveal details about single periods at zoom levels where a regular waveform plot only indicates the signal's envelope. Compression renders the enormous ranges of frequencies and amplitudes more legible.
Convention Paper 8397 (Purchase now)
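
Convention Paper 8397 describes the actual construction of the quintessential waveform; purely as an illustration of the general idea, the sketch below synthesizes, for each display column, a sinusoid whose amplitude follows the local envelope and whose period is derived from the dominant local spectral peak through an assumed compressive mapping, so that single periods remain visible at a coarse timescale. All parameter choices here are hypothetical.

```python
# Hedged sketch of one way a "quintessential" waveform could be synthesized
# for display: per display column, take the local envelope and dominant
# spectral bin, then draw a short sinusoid at the display timescale.
# The compressive frequency mapping is an assumption, not the paper's method.
import numpy as np

def quintessential(x, columns=800, samples_per_col=32):
    """Return a synthetic display waveform with columns * samples_per_col samples."""
    hop = max(1, len(x) // columns)
    out = np.zeros(columns * samples_per_col)
    phase = 0.0
    for c in range(columns):
        frame = x[c * hop:(c + 1) * hop]
        if len(frame) == 0:
            break
        spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        f_dom = np.argmax(spec[1:]) + 1          # dominant bin (skip DC)
        amp = np.max(np.abs(frame))              # local envelope
        cycles = 1 + 4 * f_dom / len(spec)       # assumed compressive mapping
        t = np.arange(samples_per_col) / samples_per_col
        col = amp * np.sin(2 * np.pi * (phase + cycles * t))
        out[c * samples_per_col:(c + 1) * samples_per_col] = col
        phase = (phase + cycles) % 1.0           # keep phase continuous
    return out
```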

P13-6 Spatial Audio Processing for Interactive TV Services
Johann-Markus Batke, Jens Spille, Holger Kropp, Stefan Abeling, Technicolor Research and Innovation - Hannover, Germany; Ben Shirley, Rob G. Oldfield, University of Salford - Salford, UK
FascinatE is a European-funded project that aims at developing a system that allows end users to interactively navigate around a video panorama showing a live event, with the accompanying audio automatically changing to match the selected view. The audiovisual content is adapted to the user's particular kind of device, covering anything from a mobile handset equipped with headphones to an immersive panoramic display connected to a large loudspeaker setup. We describe how audio content is handled in the FascinatE context, covering simple stereo through to spatial sound fields, which is accomplished with a mixture of Higher Order Ambisonics and Wave Field Synthesis. Our paper focuses on the greatest challenges for both techniques when capturing, transmitting, and rendering the audio scene.
Convention Paper 8398 (Purchase now)

P13-7 Wireless High Definition Multichannel Streaming Audio Network Technology Based on the IEEE 802.11 Standards
Seppo Nikkila, Tom Lindeman, Valentin Manea, ANT – Advanced Network Technologies Oy - Helsinki, Finland
A novel approach to the wireless distribution of uncompressed real-time multichannel streaming audio is presented. The technology is based on the IEEE 802.11 Point Coordination Function, a contention-free medium access control mode, with size-optimized multicast frames. The implementation supports eight independent audio streams carrying 24-bit audio samples simultaneously at a rate of 192 kHz. A frame-length alignment algorithm is developed for smooth, low-jitter flow. An audio-specific Forward Error Correction scheme and a low-system-latency inter-channel synchronization method are described. Clock drift is compensated for by a sample stuffing/stripping algorithm. The implemented hardware and software structures are presented, and the technology is compared with other wireless audio networking concepts. Emerging multichannel content formats are briefly reviewed.
Convention Paper 8399 (Purchase now)
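
The sample stuffing/stripping idea mentioned in the abstract above can be illustrated with a minimal receive-buffer sketch: when the buffer drifts above its target fill level (the sender's clock runs fast relative to the receiver), an occasional sample is dropped; when it drifts below, a sample is repeated. Buffer sizes, thresholds, and the repeat-rather-than-interpolate shortcut are assumptions for illustration, not details from the paper.

```python
# Conceptual sketch of clock-drift compensation by sample stuffing/stripping.
# Target fill level and slack are illustrative assumptions.
from collections import deque

class DriftCompensator:
    def __init__(self, target=1024, slack=64):
        self.buf = deque()
        self.target, self.slack = target, slack

    def push(self, samples):
        """Samples arriving from the network, on the sender's clock."""
        self.buf.extend(samples)

    def pull(self, n):
        """n samples requested by the local audio device, on its own clock."""
        out = []
        while len(out) < n:
            if not self.buf:
                out.append(0.0)                     # underrun: pad with silence
                continue
            fill = len(self.buf)
            if fill > self.target + self.slack:
                self.buf.popleft()                  # strip: drop one sample
            s = self.buf.popleft()
            out.append(s)
            if fill < self.target - self.slack and len(out) < n:
                out.append(s)                       # stuff: repeat one sample
        return out
```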

P13-8 Gestures to Operate DAW Software
Wincent Balin, Universität Oldenburg - Oldenburg, Germany; Jörn Loviscach, Fachhochschule Bielefeld (University of Applied Sciences) - Bielefeld, Germany
There is a noticeable absence of gestures, be they mouse-based or (multi-)touch-based, in mainstream digital audio workstation (DAW) software. As an example of such a gesture, consider a clockwise "O" drawn with the finger to increase the value of a parameter. The increasing availability of devices such as smartphones, tablet computers, and touchscreen displays raises the question of how far audio software can benefit from gestures. We describe design strategies to create a consistent set of gesture commands. The main part of this paper reports on a user survey of mappings between 22 DAW functions and 30 single-point as well as multi-point gestures. We discuss the findings and point out consequences for user-interface design.
Convention Paper 8456 (Purchase now)
