AES London 2011
Paper Session P7
P7 - Live and Interactive Sound
Saturday, May 14, 09:00 — 11:00 (Room 1)
P7-1 User Driven, Local Model, Reclassification of Drum Loop Audio Slices—Henry Lindsay-Smith, Queen Mary University of London - London, UK; Skot McDonald, FXpansion Audio Ltd. - London, UK; Mark Sandler, Queen Mary University of London - London, UK
We present a method for significantly improving the results of drum loop slice classification. An onset detector is used to slice loops of percussion-only audio. Low-level features are extracted from the audio slices, and the slices are classified into one of seven percussion classes by a previously trained PART decision table. This general classification algorithm achieves only adequate performance. The user is then allowed to correct any misclassifications. Each correction is combined with a subset of the original classifications, and a nearest-neighbor algorithm reclassifies the remaining slices according to the corrected local model. The resulting algorithm converges on a 100% correct solution with nearly 40% fewer reclassifications than an unassisted approach.
Convention Paper 8356
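The correct-and-reclassify loop can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and variable names are hypothetical, plain squared-Euclidean 1-NN is assumed, and the local model here uses all other slices rather than the particular subset of original classifications the paper selects.

```python
def reclassify(features, predicted, corrected):
    """Relabel slices by 1-NN against a local model that folds in the
    user's corrections (hypothetical sketch of the approach)."""
    # Start from the classifier's output, then overwrite with corrections.
    labels = list(predicted)
    for idx, true_class in corrected.items():
        labels[idx] = true_class
    result = []
    for i in range(len(features)):
        if i in corrected:
            result.append(corrected[i])  # user-verified: keep as-is
            continue
        # Nearest neighbor among all other slices; corrected labels
        # participate in the model and so propagate to similar slices.
        best_label, best_dist = None, float("inf")
        for j in range(len(features)):
            if j == i:
                continue
            dist = sum((a - b) ** 2 for a, b in zip(features[i], features[j]))
            if dist < best_dist:
                best_dist, best_label = dist, labels[j]
        result.append(best_label)
    return result
```

Repeating this after each user correction is the iteration that, per the abstract, converges on a fully correct labeling with fewer total corrections than relabeling every slice by hand.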
P7-2 Kick-Drum Signal Acquisition, Isolation and Reinforcement Optimization in Live Sound—Adam J. Hill, Malcolm O. J. Hawksford, University of Essex - Colchester, Essex, UK; Adam P. Rosenthal, Gary Gand, Gand Concert Sound - Glenview, IL, USA
A critical requirement for popular music in live-sound applications is the achievement of a robust kick-drum sound presented to the audience and the drummer while simultaneously achieving a workable degree of acoustic isolation for other on-stage musicians. Routinely a transparent wall is placed parallel to the kick-drum heads to attenuate sound from the drummer’s monitor loudspeakers, although this can cause sound quality impairment from comb-filter interference. Practical optimization techniques are explored, embracing microphone selection and placement (including multiple microphones in combination), isolation-wall location, drum-monitor electronic delay, and echo cancellation. A system analysis is presented, augmented by real-world measurements and relevant simulations using a bespoke Finite-Difference Time-Domain (FDTD) algorithm.
Convention Paper 8357
P7-3 Development of a Virtual Performance Studio with Application of Virtual Acoustic Recording Methods—Iain Laird, Glasgow School of Art - Glasgow, Scotland, UK; Damian Murphy, University of York - Heslington, York, UK; Paul Chapman, Glasgow School of Art - Glasgow, Scotland, UK; Seb Jouan, Arup - Glasgow, Scotland, UK
A Virtual Performance Studio (VPS) is a space that allows a musician to practice in a virtual version of a real performance space in order to acclimatize to the acoustic feedback received on stage before physically performing there. Traditional auralization techniques achieve this by convolving the direct sound from the instrument with the appropriate on-stage impulse response. In order to capture only the direct sound from the instrument, a directional microphone is often used at small distances from the instrument. This can give rise to noticeable tonal distortion due to the proximity effect and spatial sampling of the instrument’s directivity function. This work reports on the construction of a prototype VPS system and goes on to demonstrate how an auralization can be significantly affected by the placement of the microphone around the instrument, contributing to a reported “PA effect.” Informal listening tests have suggested a general preference for auralizations that combine multiple microphones placed around the instrument.
Convention Paper 8358
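The traditional auralization step the abstract describes is a direct convolution of the close-miked instrument signal with a measured stage impulse response. A minimal sketch, with hypothetical function and variable names and a single-microphone simplification:

```python
def auralize(dry, ir):
    """Direct-form convolution of a close-miked 'dry' instrument signal
    with a stage impulse response (the core of traditional auralization)."""
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, sample in enumerate(dry):
        for j, tap in enumerate(ir):
            out[i + j] += sample * tap  # each input sample excites the IR
    return out
```

A multiple-microphone auralization of the kind the listening tests favored would instead sum several such convolutions, one per microphone position around the instrument, each with its own capture of the directivity function.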
P7-4 Interactive Audio Realities: An Augmented / Mixed Reality Audio Game Prototype—Nikos Moustakas, Andreas Floros, Nicolas Grigoriou, Ionian University - Corfu, Greece
Audio games represent a game alternative based on audible rather than visual feedback. They may benefit from parametric sound synthesis and advanced audio technologies (e.g., augmented reality audio) in order to effectively realize complex scenarios. In this paper a multiplayer game prototype is introduced that employs the concept of controlled mixed reality in order to augment the sound environment of each player. The prototype is realized as multiple per-user audiovisual installations, interconnected in order to communicate the status of the selected control parameters in real time. The prototype bears significant resemblance to well-known online multiplayer games, with its novelty originating from the fact that user interaction takes place in augmented reality audio environments.
Convention Paper 8359