AES Warsaw 2015
Poster Session P17

P17 - (Poster) Recording and Production


Sunday, May 10, 13:00 — 15:00 (Foyer)

P17-1 Tom–Tom Drumheads Miking Analysis
Andrés Felipe Quiroga, Universidad de San Buenaventura - Bogotá, Colombia; Juan David Garcia, Universidad de San Buenaventura - Bogotá, Colombia; Dario Páez, Universidad de San Buenaventura - Bogotá, Colombia
Different drum recording techniques have been developed over time, from stereo to close miking techniques. This is relevant, since the techniques and the characteristics of the instrument will define its sound within the final mix. A study was designed that gives experienced and non-experienced recording engineers tools and specific characteristics of tom–tom close miking techniques with different drumheads, microphones, and capture positions. Results indicate the behaviors of the different drumheads and capture positions with the different microphones. The first frequency band of resonance (attack) shows the highest decay level compared to the second band of resonance (tone), and the edge position presented the lowest decay level in the second band of resonance, showing its resonant behavior in the envelope.
Convention Paper 9338

P17-2 Concept of Film Sound Restoration by Adapting to Contemporary Cinema Theatre
Joanna Napieralska, Frederic Chopin University of Music - Warsaw, Poland
This paper presents an individual approach to the restoration of Polish film sound based on the author’s own works. It answers the following question: under what conditions, and by the use of which techniques, may the restoration of archive film sound provide the viewer with a cleaner reproduction of the original sound while maintaining the standard expected by a modern cinema-going audience? At its basic level, the sound restoration routine comprises the following: transfer from the magnetic tape, syncing, cleaning of low/high frequency noises, repair of material impairments and the reprinting effect, and mastering for broadcasting, cinema, DVD/Blu-ray, and internet formats. The reconstruction discussed modifies the sound quality and sometimes the contents. However, it can be performed only under certain legal restrictions.
Convention Paper 9339

P17-3 Deep Sound Design: Procedural Implementations Based on General Audiovisual Production Pipeline Integration
José Roberto Cabezas Hernández, Universidad Nacional Autónoma de México - Mexico City, Mexico
This work explores the integration of data available in the visual post-production pipeline into the development of procedural sound design and composition techniques, implementing methods that read different file formats for scene and shot reconstruction. The main purpose, in an audiovisual creation context, is to investigate stronger and deeper image–sound cognitive perceptions and relationships generated by data usage and analysis, and to reduce automation time by directly linking data to parameters, supporting a creative editing, mixing, design, and compositional workflow based on shot-by-shot manipulation.
Convention Paper 9340

P17-4 An Investigation into the Efficacy of Methods Commonly Employed by Mix Engineers to Reduce Frequency Masking in the Mixing of Multitrack Musical Recordings
Jonathan Wakefield, University of Huddersfield - Huddersfield, UK; Christopher Dewey, University of Huddersfield - Huddersfield, UK
Studio engineers use a variety of techniques to reduce frequency masking between instruments when mixing multitrack musical recordings. This study evaluates the efficacy of three techniques, namely mirrored equalization, frequency spectrum sharing, and stereo panning, against their variations to confirm the veracity of accepted practice. Mirrored equalization involves boosting one instrument and cutting the other at the same frequency. Frequency spectrum sharing involves low-pass filtering one instrument and high-pass filtering the other. Panning involves placing two competing instruments at different pan positions. Test subjects used eight tools, each comprising a single unlabeled slider, to reduce frequency masking in several two-instrument scenarios. Satisfaction values were recorded. Results indicate subjects preferred using tools that panned both audio tracks.
Convention Paper 9341
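One of the techniques evaluated above, frequency spectrum sharing, can be sketched in a few lines. The following Python sketch is illustrative only: the function names and the one-pole filters are my assumptions, not the study's implementation. It low-pass filters one competing track and high-pass filters the other around a shared crossover, so the two instruments occupy different parts of the spectrum.

```python
# Hypothetical sketch of "frequency spectrum sharing" (not the paper's code):
# instrument A keeps the lows, instrument B keeps the highs, reducing
# spectral overlap (and hence masking) around the crossover frequency.
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    """Smoothing coefficient for a one-pole low-pass filter."""
    return math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def low_pass(signal, cutoff_hz, sample_rate):
    """One-pole low-pass: y[n] = (1 - a) * x[n] + a * y[n - 1]."""
    a = one_pole_coeff(cutoff_hz, sample_rate)
    out, y = [], 0.0
    for x in signal:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def high_pass(signal, cutoff_hz, sample_rate):
    """Complementary high-pass: the input minus its low-passed version."""
    lp = low_pass(signal, cutoff_hz, sample_rate)
    return [x - l for x, l in zip(signal, lp)]

def share_spectrum(track_a, track_b, crossover_hz, sample_rate):
    """Low-pass one competing track and high-pass the other."""
    return (low_pass(track_a, crossover_hz, sample_rate),
            high_pass(track_b, crossover_hz, sample_rate))
```

A production mixer would use steeper filters (e.g. Linkwitz-Riley crossovers), but the principle, splitting the contested band between the two tracks, is the same.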

P17-5 An Interactive Multimedia Experience: A Case Study
Andrew J. Horsburgh, Southampton Solent University - Southampton, UK
Accurate representation of three-dimensional spaces, both real and virtual, within an environment is a matter of concern for researchers and content producers in the media industry; it is expected that truly immersive experiences will become more desirable outside of research labs and bespoke facilities. This paper presents a case study examining the integration of visual and audible elements to form a single immersive experience, AIME, at Solent University. The computer-based system uses a time-code generator that allows for seamless integration between audio workstations, visual playback, and external lighting. The prototype system uses second-order Ambisonic audio reproduction, three large panel displays for vision, and an external lighting rig running from time code.
Convention Paper 9342

P17-6 Evaluation of an Algorithm for the Automatic Detection of Salient Frequencies in Individual Tracks of Multitrack Musical Recordings
Jonathan Wakefield, University of Huddersfield - Huddersfield, UK; Christopher Dewey, University of Huddersfield - Huddersfield, UK
This paper evaluates the performance of a salient frequency detection algorithm. The algorithm takes, for each FFT bin, the maximum value of that bin across an audio region, then identifies peaks among these bin maxima, with the highest five deemed to be the most salient frequencies. To determine the algorithm’s efficacy, test subjects were asked to identify the salient frequencies in eighteen audio tracks, and these results were compared against the algorithm’s output. The algorithm was successful with electric guitars but struggled with other instruments and in detecting secondary salient frequencies. In a second experiment subjects equalised the same audio tracks using the detected peaks as fixed centre frequencies. Subjects were more satisfied than expected when using these frequencies.
Convention Paper 9343
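The detection step described above can be sketched directly from the abstract. In this Python sketch (illustrative only; the function names and the radix-2 FFT are my assumptions, not the authors' code), the per-bin maximum magnitude is taken across all FFT frames of a region, local peaks in that envelope are found, and the five largest are returned as the salient frequencies.

```python
# Hypothetical sketch of the described salient-frequency detector
# (names and structure are assumptions, not the paper's implementation).
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

def salient_frequencies(frames, sample_rate, n_peaks=5):
    """Return up to n_peaks (frequency_hz, magnitude) pairs, loudest first."""
    n = len(frames[0])
    n_bins = n // 2  # keep only the positive-frequency half
    # Per-bin maximum magnitude across every frame of the region.
    bin_max = [0.0] * n_bins
    for frame in frames:
        spectrum = fft([complex(s) for s in frame])
        for k in range(n_bins):
            bin_max[k] = max(bin_max[k], abs(spectrum[k]))
    # Local peaks in the bin-maximum envelope, largest magnitudes first.
    peaks = [(bin_max[k], k) for k in range(1, n_bins - 1)
             if bin_max[k] > bin_max[k - 1] and bin_max[k] >= bin_max[k + 1]]
    peaks.sort(reverse=True)
    return [(k * sample_rate / n, mag) for mag, k in peaks[:n_peaks]]
```

The abstract's finding, that secondary salient frequencies were often missed, is plausible for this scheme: a strong fundamental's spectral leakage can swamp nearby smaller peaks in the bin-maximum envelope.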

P17-7 The Sonic Vernacular: Considering Communicative Timbral Gestures in Modern Music Production
Leah Kardos, Kingston University London - Kingston Upon Thames, Surrey, UK
Over the course of audio recording history, we have seen the activity of sound recording widen in scope “from a technical matter to a conceptual and artistic one” (Moorefield 2010) and the producer’s role evolve from technician to “auteur.” For recording practitioners engaged in artistic and commercial industry and discourse, fluency in contemporary and historic sound languages is advantageous. This paper seeks a practically useful method for describing these characteristics in practice, and a clear and suitable way to talk about and analyze uses of communicative timbral gestures, as heard in modern music productions.
Convention Paper 9344

P17-8 Auto Panning In-Ear Monitors for Live Performers
Tom Webb, Southampton Solent University - Southampton, UK; Andrew J. Horsburgh, Southampton Solent University - Southampton, UK
In a live musical performance, accurate stage monitoring is vital to achieving an optimal performance. Current stage monitoring uses traditional musician-facing loudspeakers, whose problems include excessive sound pressure level (SPL), performers’ inability to hear themselves, acoustic feedback, and general stage untidiness/space requirements. In-ear monitors (IEMs) can offer a solution to these problems when the IEM system has been properly designed [7]. One crucial issue with IEMs is the sense of isolation and disconnection from stage noise and the crowd. To overcome this issue, an auto-panning system that adjusts the spatial placement of audio channels within the performer’s stage mix has been designed and built.
Convention Paper 9345
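The spatial-placement step of such a system is typically built on a constant-power pan law. The Python sketch below is an assumption about a plausible building block, not the authors' design: it pans a mono channel between the two earpieces under a slow sine LFO while keeping perceived loudness constant.

```python
# Hypothetical auto-panner building block (not the paper's implementation):
# a constant-power pan law driven by a low-frequency oscillator.
import math

def constant_power_pan(sample, position):
    """position in [-1.0, 1.0]: -1 = hard left, 0 = centre, +1 = hard right.

    Returns (left, right) with left**2 + right**2 == sample**2, so perceived
    loudness stays constant as the source moves across the stereo field.
    """
    angle = (position + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

def auto_pan(mono, rate_hz, sample_rate):
    """Sweep a mono channel left-right with a slow sine LFO."""
    out = []
    for n, s in enumerate(mono):
        position = math.sin(2.0 * math.pi * rate_hz * n / sample_rate)
        out.append(constant_power_pan(s, position))
    return out
```

In an IEM context the pan position would come not from an LFO but from tracked source positions on stage, so the monitor mix follows the physical layout the performer sees.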

P17-9 An Investigation into Plausibility in the Mixing of Foley Sounds in Film and Television
Braham Hughes, University of Huddersfield - Huddersfield, UK; Jonathan Wakefield, University of Huddersfield - Huddersfield, UK
This paper describes an experiment that tested the plausibility of a selection of post-production audio mixes of Foley for a short film. The mixes differed in the implementation of four primary audio mixing parameters: panning, level, equalization, and the control of reverberation effects. The experiments presented test subjects with mixes in which one of the four primary parameters was altered while the rest remained at levels deemed to conform to an “industry standard” reference mix that had been verified by an expert industry practitioner. Results show a statistically significant effect on plausibility when even slight dynamic variation of pan, level, and equalization is used to enhance the perceived realism of Foley sounds that move in a scene.
Convention Paper 9346

P17-10 A Semantically Motivated Gestural Interface for the Control of a Dynamic Range Compressor
Thomas Wilson, University of Huddersfield - Huddersfield, West Yorkshire, UK; Steven Fenton, University of Huddersfield - Huddersfield, West Yorkshire, UK; Matthew Stephenson, University of Huddersfield - Huddersfield, West Yorkshire, UK
This paper presents a simplified 2D gesture-based approach to modifying dynamics within a musical signal. Despite the growth in gesture-controlled audio over recent years, it has primarily been limited to the upper workflow/navigation level. This has been compounded by the skeuomorphic design of graphical user interfaces (GUIs), which, although representative of the original piece of audio equipment, often slows workflow and hinders the simultaneous control of parameters. Following a large-scale gesture elicitation exercise using a common 2D touch pad, and an analysis of semantic audio control parameters, a reduced set of multi-modal parameters is proposed that offers both workflow efficiency and a much simplified method of control for dynamic range compression.
Convention Paper 9347

P17-11 Natural Sound Recording of an Orchestra with Three-Dimensional Sound
Kimio Hamasaki, ARTSRIDGE LLC - Chiba, Japan; Wilfried Van Baelen, Auro Technologies N.V. - Mol, Belgium
This paper introduces microphone techniques for recording an orchestra in three-dimensional multichannel sound and discusses the spatial impression provided by the recorded sound of an orchestra. Listeners in a concert hall simultaneously hear both the direct sound arriving from each musical instrument and the indirect sound reflected from the walls and ceiling. For direct sound, existing microphone techniques can be used for three-dimensional multichannel sound with the necessary modifications, but new microphone techniques should be developed for indirect sound. This paper proposes a microphone technique consisting of a main microphone array and an ambience microphone array, which enables easy control of spatial impression and stable sound-source localization.
Convention Paper 9348



EXHIBITION HOURS
May 7th   10:00 – 18:00
May 8th   09:00 – 18:00
May 9th   09:00 – 18:00

REGISTRATION DESK
May 6th   15:00 – 18:00
May 7th   09:30 – 18:30
May 8th   08:30 – 18:30
May 9th   08:30 – 18:30
May 10th   08:30 – 16:30

TECHNICAL PROGRAM
May 7th   10:00 – 18:00
May 8th   09:00 – 18:00
May 9th   09:00 – 18:00
May 10th   09:00 – 17:00