AES 123rd Convention - Where Audio Comes Alive

AES New York 2007
Poster Session P17

P17 - Signal Processing Applied to Music

Sunday, October 7, 2:30 pm — 4:00 pm
P17-1 Toward Textual Annotation of Rhythmic Style in Electronic Dance Music
Kurt Jacobson, Matthew Davies, Mark Sandler, Queen Mary University of London - London, UK
Music information retrieval encompasses a complex and diverse set of problems. Some recent work has focused on automatic textual annotation of audio data, paralleling work in image retrieval. Here we take a narrower approach to the automatic textual annotation of music signals and focus on rhythmic style. Training data for rhythmic styles are derived from simple, precisely labeled drum loops intended for content creation. These loops are already textually annotated with the rhythmic style they represent. The training loops are then compared against a database of music content to apply textual annotations of rhythmic style to unheard music signals. Three distinct methods of rhythmic analysis are explored. These methods are tested on a small collection of electronic dance music, resulting in a labeling accuracy of 73 percent.
Convention Paper 7268 (Purchase now)
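
The abstract does not specify the three rhythmic analysis methods, so the following is only a minimal sketch of the general idea: label an unseen track with the style tag of the most similar annotated drum loop. The use of librosa, the tempogram descriptor, and all file names and style labels are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (not the paper's exact method): nearest-neighbor rhythmic style
# labeling against annotated drum loops, using a tempogram summary as the
# rhythm descriptor. Paths, labels, and the librosa features are assumptions.
import numpy as np
import librosa

def rhythm_descriptor(path, sr=22050):
    """Mean tempogram over time: a coarse, tempo-sensitive rhythm summary."""
    y, sr = librosa.load(path, sr=sr)
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)
    tempogram = librosa.feature.tempogram(onset_envelope=onset_env, sr=sr)
    v = tempogram.mean(axis=1)
    return v / (np.linalg.norm(v) + 1e-12)

# Training data: drum loops whose annotations carry the rhythmic style.
loops = {
    "breakbeat_loop.wav": "breakbeat",
    "four_on_floor_loop.wav": "house",
    "dnb_loop.wav": "drum and bass",
}
loop_vectors = {label: rhythm_descriptor(path) for path, label in loops.items()}

def annotate(track_path):
    """Assign the style of the most similar loop (cosine similarity)."""
    v = rhythm_descriptor(track_path)
    return max(loop_vectors, key=lambda lbl: float(np.dot(v, loop_vectors[lbl])))

print(annotate("unheard_track.wav"))
```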

P17-2 Key-Independent Classification of Harmonic Change in Musical Audio
Ernest Li, Juan Pablo Bello, New York University - New York, NY, USA
We introduce a novel method for describing the harmonic development of a musical signal by using only low-level audio features. Our approach uses Euclidean and phase distances in a tonal centroid space. Both measurements are taken between successive chroma partitions of a harmonically segmented signal, for each of three harmonic circles representing fifths, major thirds, and minor thirds. The resulting feature vector can be used to quantify a string of successive chord changes according to changes in chord quality and movement of the chordal root. We demonstrate that our feature set can provide both unique classification and accurate identification of harmonic changes, while resisting variations in orchestration and key.
Convention Paper 7269 (Purchase now)
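
A rough sketch of the kind of feature the abstract describes: per-circle Euclidean and phase distances between successive points in the 6-D tonal centroid (Tonnetz) space. Frame-level chroma is used here instead of the paper's harmonically segmented chroma partitions, and the librosa feature choices and dimension ordering are assumptions.

```python
# Minimal sketch (assumptions: librosa chroma/tonnetz, frame-level rather than
# harmonically segmented partitions). The 6-D tonal centroid packs three
# circles: fifths, minor thirds, and major thirds.
import numpy as np
import librosa

y, sr = librosa.load("example.wav")                        # hypothetical input
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
tonnetz = librosa.feature.tonnetz(chroma=chroma, sr=sr)    # shape (6, n_frames)

# Each circle occupies one 2-D coordinate pair, per librosa's documented order.
circles = {"fifths": (0, 1), "minor_thirds": (2, 3), "major_thirds": (4, 5)}

features = []
for t in range(1, tonnetz.shape[1]):
    prev, curr = tonnetz[:, t - 1], tonnetz[:, t]
    row = []
    for name, (i, j) in circles.items():
        # Euclidean distance within this circle's 2-D subspace.
        row.append(np.hypot(curr[i] - prev[i], curr[j] - prev[j]))
        # Phase (angular) distance within the same circle, wrapped to [0, pi].
        dphi = np.arctan2(curr[i], curr[j]) - np.arctan2(prev[i], prev[j])
        row.append(abs((dphi + np.pi) % (2 * np.pi) - np.pi))
    features.append(row)

features = np.array(features)   # (n_frames - 1, 6) harmonic-change descriptor
```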

P17-3 Automatic Bar Line Segmentation
Mikel Gainza, Dan Barry, Eugene Coyle, Dublin Institute of Technology - Dublin, Ireland
A method that segments the audio according to the position of the bar lines is presented. The method detects musical bars that frequently repeat in different parts of a musical piece by using an audio similarity matrix. The position of each bar line is predicted by using prior information about the position of previous bar lines as well as the estimated bar length. The bar line segmentation method does not depend on the presence of percussive instruments to calculate the bar length. In addition, the alignment of the bars allows for moderate tempo deviations.
Convention Paper 7270 (Purchase now)
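
As a rough illustration of the similarity-matrix idea, the sketch below estimates a candidate bar length, in beats, from the periodicity of a beat-synchronous chroma self-similarity matrix. The beat tracking, the diagonal-averaging heuristic, and the lag range searched are all stand-in assumptions; the paper's prediction and alignment of individual bar lines is not reproduced here.

```python
# Minimal sketch (an assumption-laden stand-in, not the authors' algorithm):
# estimate a candidate bar length from repetition structure in a
# beat-synchronous chroma self-similarity matrix.
import numpy as np
import librosa

y, sr = librosa.load("example.wav")                      # hypothetical input
_, beats = librosa.beat.beat_track(y=y, sr=sr)
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
sync = librosa.util.sync(chroma, beats)                  # one chroma column per beat

# Cosine self-similarity between beats.
norm = sync / (np.linalg.norm(sync, axis=0, keepdims=True) + 1e-12)
S = norm.T @ norm                                        # (n_beats, n_beats)

# Repeating bars show up as strong similarity at lags that are multiples of the
# bar length; average each diagonal and pick the strongest small lag.
lag_strength = {lag: np.mean(np.diag(S, k=lag)) for lag in range(2, 9)}
bar_beats = max(lag_strength, key=lag_strength.get)

print(f"estimated bar length: {bar_beats} beats")
# Candidate bar-line positions (beat frame indices) then follow by stepping
# bar_beats at a time; the paper refines these with prior bar positions.
bar_lines = beats[::bar_beats]
```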

P17-4 The Analysis and Determination of the Tuning System in Audio Musical Signals
Peyman Heydarian, Lewis Jones, Allan Seago, London Metropolitan University - London, UK
The tuning system is an essential aspect of a musical piece. It specifies the scale intervals and contributes to the emotional character of a song. For modal musical traditions, there is a direct relationship between the musical mode and the tuning of a piece; in a broader sense, the tuning system is characteristic of different genres. In this paper algorithms based on spectral and chroma averages are developed to construct patterns from audio musical files. A similarity measure such as the Manhattan distance or cross-correlation then determines the similarity of a piece to each tuning class. The tuning system provides valuable information about a piece and is worth incorporating into the metadata of a musical file.
Convention Paper 7271 (Purchase now)
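
A minimal sketch of the chroma-average variant described in the abstract: average a higher-resolution chroma over the whole file and assign the tuning class whose reference pattern lies at the smallest Manhattan distance. The quarter-tone resolution, the placeholder templates, and the librosa calls are assumptions for illustration only.

```python
# Minimal sketch (illustrative assumptions: librosa, quarter-tone chroma
# resolution, and placeholder reference templates per tuning class).
import numpy as np
import librosa

def tuning_pattern(path, bins=24):
    """Time-averaged high-resolution chroma: a coarse pitch-interval profile."""
    y, sr = librosa.load(path)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr,
                                        n_chroma=bins, bins_per_octave=bins)
    v = chroma.mean(axis=1)
    return v / (v.sum() + 1e-12)

# Reference patterns, one per tuning class. These are random placeholders;
# a real system would derive them from labeled recordings or interval theory.
rng = np.random.default_rng(0)
templates = {
    "tuning_class_A": rng.random(24),
    "tuning_class_B": rng.random(24),
}
templates = {name: t / t.sum() for name, t in templates.items()}

def classify(path):
    """Pick the tuning class whose template has the smallest Manhattan distance."""
    p = tuning_pattern(path)
    return min(templates, key=lambda name: float(np.abs(p - templates[name]).sum()))

print(classify("example_piece.wav"))
```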

