AES Vienna 2007
Technical Program: Paper Sessions

P27 - Analysis and Synthesis of Sound

Tuesday, May 8, 15:30 — 17:00

P27-1 Time Signature Detection by Using a Multiresolution Audio Similarity Matrix
Mikel Gainza, Eugene Coyle, Dublin Institute of Technology - Dublin, Ireland
A method that estimates the time signature of a piece of music is presented. The approach exploits the repetitive structure of most music, in which the same musical bar recurs in different parts of a piece. The method uses a multiresolution audio similarity matrix, which allows longer audio segments (bars) to be compared by combining comparisons of shorter segments (fractions of a note). The method depends only on musical structure, not on the presence of percussive instruments or strong musical accents.
Convention Paper 7154
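
A minimal sketch of the multiresolution idea, under stated assumptions: cosine similarity between short feature frames, a block-averaging score, and random stand-in features are all choices made here for illustration; the abstract does not specify the authors' actual features or scoring rule.

```python
import numpy as np

def frame_similarity(features):
    """Cosine similarity between all pairs of short-time feature frames."""
    unit = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    return unit @ unit.T

def bar_similarity(S, frames_per_bar):
    """Bar-to-bar similarity: average the main diagonal of each
    frames_per_bar x frames_per_bar block of the frame-level matrix,
    so long segments are compared via their short-segment alignments."""
    n_bars = S.shape[0] // frames_per_bar
    B = np.zeros((n_bars, n_bars))
    for i in range(n_bars):
        for j in range(n_bars):
            block = S[i * frames_per_bar:(i + 1) * frames_per_bar,
                      j * frames_per_bar:(j + 1) * frames_per_bar]
            B[i, j] = np.mean(np.diag(block))
    return B

def estimate_beats_per_bar(features, frames_per_beat, candidates=(3, 4)):
    """Score each candidate meter by how strongly bars repeat when the
    piece is segmented that way; return the highest-scoring candidate."""
    S = frame_similarity(features)
    def score(beats):
        B = bar_similarity(S, beats * frames_per_beat)
        n = B.shape[0]
        if n < 2:
            return 0.0
        return (B.sum() - np.trace(B)) / (n * n - n)  # off-diagonal mean
    return max(candidates, key=score)

# Toy usage: random "features" stand in for real spectral frames.
rng = np.random.default_rng(0)
print(estimate_beats_per_bar(rng.random((480, 20)), frames_per_beat=4))
```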

P27-2 Signal Processing Parameters for Tonality Estimation
Katy Noland, Mark Sandler, Queen Mary, University of London - London, UK
All musical audio feature extraction techniques require some form of signal processing as a first step. However, the choice of low-level parameters such as window size is often disregarded, and arbitrary values are chosen. We investigate the effect of low-level parameter choice on different tonality estimation algorithms and show that these parameters can make a significant difference to the results. We also show that the choice of parameters is algorithm-specific, so optimization is required for each method.
Convention Paper 7155
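
A sketch of the kind of parameter sweep the paper argues for: the same audio, the same estimator, different window sizes. The estimator used here (STFT chroma correlated against the Krumhansl-Kessler major-key profile) is a standard textbook choice, not necessarily one of the algorithms the paper evaluates.

```python
import numpy as np

# Krumhansl-Kessler major-key profile, C major ordering (textbook values).
MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def chroma(signal, sr, n_fft, hop):
    """12-bin chroma summed over an STFT magnitude spectrogram."""
    window = np.hanning(n_fft)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    bins = np.zeros(12)
    for start in range(0, len(signal) - n_fft, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + n_fft] * window))
        for f, mag in zip(freqs[1:], spectrum[1:]):  # skip DC
            pitch_class = int(round(69 + 12 * np.log2(f / 440.0))) % 12  # 0 = C
            bins[pitch_class] += mag
    return bins

def estimate_key(signal, sr, n_fft, hop):
    """Major key whose rotated profile correlates best with the chroma."""
    c = chroma(signal, sr, n_fft, hop)
    scores = [np.corrcoef(c, np.roll(MAJOR_PROFILE, k))[0, 1] for k in range(12)]
    return NOTE_NAMES[int(np.argmax(scores))]

# Same audio, different window sizes: the estimates need not agree,
# which is the effect the paper quantifies.
sr = 44100
t = np.arange(2 * sr) / sr
signal = sum(np.sin(2 * np.pi * 440 * r * t) for r in (1.0, 1.25, 1.5))  # A major triad
for n_fft in (1024, 2048, 4096, 8192):
    print(n_fft, estimate_key(signal, sr, n_fft, hop=n_fft // 2))
```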

P27-3 Audio Effects for Real-Time Performance Using Beat Tracking
A. M. Stark, M. D. Plumbley, M. E. P. Davies, Queen Mary, University of London - London, UK
We present a new class of digital audio effects that automatically relate parameter values to the tempo of a musical input in real time. Using a beat tracking system as the front end, we demonstrate a tempo-dependent delay effect and a set of beat-synchronous low-frequency oscillator (LFO) effects, including auto-wah, tremolo, and vibrato. The effects perform better than might be expected because they are insensitive to certain beat tracker errors. All effects are implemented as VST plug-ins that operate in real time, enabling their use both in live musical performance and in the off-line modification of studio recordings.
Convention Paper 7156
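
The central mechanism, deriving an LFO rate from an estimated tempo instead of a hand-set value, can be sketched offline in a few lines; the beat tracker is stubbed out with a fixed BPM, and the real-time VST plumbing of the paper is omitted.

```python
import numpy as np

def tempo_synced_tremolo(x, sr, bpm, beats_per_cycle=1.0, depth=0.8):
    """Amplitude-modulate x with an LFO locked to the musical tempo:
    one modulation cycle spans beats_per_cycle beats."""
    lfo_hz = (bpm / 60.0) / beats_per_cycle
    t = np.arange(len(x)) / sr
    lfo = 0.5 * (1.0 + np.sin(2 * np.pi * lfo_hz * t))  # ranges 0..1
    return x * (1.0 - depth + depth * lfo)

sr = 44100
x = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # 1 s test tone
estimated_bpm = 120.0  # stand-in for the beat tracker's live output
y = tempo_synced_tremolo(x, sr, estimated_bpm, beats_per_cycle=0.5)
```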

P27-4 JAVA Library for Automatic Musical Instruments Recognition
Piotr Aniola, Ewa Lukasik, Poznan University of Technology - Poznan, Poland
The paper presents an open-source Java library for the analysis and classification of musical instrument sounds. It consists of two main parts: one devoted to feature extraction and the other performing musical instrument recognition and similarity assessment. The project's plug-in-based structure makes both modules extensible. In the current version two sound modeling algorithms have been implemented: k-means and Gaussian Mixture Models. The software was created to recognize different exemplars of the same type of instrument and has been validated on electric guitars, guitar amplifiers, and violins. The project follows current software engineering practice, enabling developers to easily create usable, reliable, and extensible programs.
Convention Paper 7157
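
The recognition stage can be mirrored in a few lines: one generative model per instrument class, then maximum-likelihood classification. The library itself is Java; this scikit-learn sketch of the GMM path is an illustration only, and the random stand-in features are hypothetical placeholders for the extraction stage's output.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_models(features_by_class, n_components=4, seed=0):
    """Fit one GMM per class from {label: (frames x dims) feature array}."""
    return {label: GaussianMixture(n_components=n_components,
                                   random_state=seed).fit(feats)
            for label, feats in features_by_class.items()}

def classify(models, features):
    """Return the class whose model gives the highest mean log-likelihood."""
    return max(models, key=lambda label: models[label].score(features))

# Toy usage with random stand-in features (rows = frames, cols = dims).
rng = np.random.default_rng(0)
train = {"electric_guitar": rng.normal(0, 1, (200, 13)),
         "violin": rng.normal(3, 1, (200, 13))}
models = train_models(train)
print(classify(models, rng.normal(3, 1, (50, 13))))  # -> 'violin'
```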

P27-5 Extraction of Long-Term Rhythmic Structures Using the Empirical Mode Decomposition
Peyman Heydarian, Joshua D. Reiss, Queen Mary, University of London - London, UK
Long-term musical structures provide information about rhythm, melody, and composition. Although highly relevant musically, these structures are difficult to determine using standard signal processing techniques. This paper describes a new technique based on the time-domain empirical mode decomposition, which separates a signal into its constituent oscillations; these can be modified to produce a new version of the signal. The technique enables analysis of the long-term metrical structures in musical signals and provides insight into perceived rhythms and their relationship to the signal. Results are reported and discussed.
Convention Paper 7158
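
A compact sketch of the empirical mode decomposition itself, with deliberately simplified stopping criteria (a fixed sifting count and a minimum-extrema guard); production implementations sift more carefully.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def envelope_mean(x):
    """Mean of cubic-spline envelopes through local maxima and minima,
    or None when too few extrema remain (x is then a residual trend)."""
    t = np.arange(len(x))
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 4 or len(minima) < 4:
        return None
    upper = CubicSpline(maxima, x[maxima])(t)
    lower = CubicSpline(minima, x[minima])(t)
    return (upper + lower) / 2

def emd(x, max_imfs=6, sift_iters=10):
    """Peel off intrinsic mode functions (IMFs) from fastest to slowest
    by repeatedly subtracting the envelope mean; return (imfs, residual)."""
    imfs, residual = [], np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        h = residual.copy()
        for _ in range(sift_iters):  # fixed-count sifting, simplified
            m = envelope_mean(h)
            if m is None:
                return imfs, residual
            h = h - m
        imfs.append(h)
        residual = residual - h
    return imfs, residual

# Toy usage: a fast and a slow sine separate into distinct IMFs; the
# slowest modes and the residual carry the long-term structure the
# paper analyzes, and summing modified IMFs resynthesizes the signal.
t = np.arange(0, 10, 0.01)
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 0.5 * t)
imfs, residual = emd(x)
print(len(imfs))
```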