The Journal of the Audio Engineering Society — the official publication of the AES — is the only peer-reviewed journal devoted exclusively to audio technology. Published 10 times each year, it is available to all AES members and subscribers.
The Journal contains state-of-the-art review papers, technical papers, and engineering reports, as well as standards committee work, convention and conference announcements, membership news, and book reviews.
Authors: Font, Frederic; Stolfi, Ariane; Schroeder, Franziska
Authors: Baratè, Adriano; Ludovico, Luca A.
Affiliation: Laboratory of Music Informatics, Department of Computer Science, University of Milan, Via G. Celoria 18, 20133 Milan, Italy
The Web MIDI API is intended to connect a browser app with Musical Instrument Digital Interface (MIDI) devices and let them interact. The interface handles the exchange of MIDI messages between a browser app and an external MIDI system, either physical or virtual. Standardization by the World Wide Web Consortium (W3C) started about 10 years ago, with a first public draft published in October 2012, and the process is not yet complete. Because this technology can pave the way for innovative applications in musical and extra-musical fields, the present paper aims to unveil the main features of the API, highlighting its advantages and drawbacks and discussing several applications that could benefit from its adoption.
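The MIDI messages exchanged through the API are plain byte sequences. As a minimal sketch (not code from the paper), the helpers below build standard Note On/Note Off messages; the function names are illustrative, while `navigator.requestMIDIAccess()` and `output.send()` in the comment are the API's actual entry points.

```javascript
// A MIDI Note On message is three bytes: status (0x90 | channel), note number,
// and velocity; Note Off uses status 0x80. Data bytes are limited to 0-127.
function noteOn(channel, note, velocity) {
  return new Uint8Array([0x90 | (channel & 0x0f), note & 0x7f, velocity & 0x7f]);
}

function noteOff(channel, note) {
  return new Uint8Array([0x80 | (channel & 0x0f), note & 0x7f, 0]);
}

// In a browser, these byte arrays would be sent to a device like so:
// navigator.requestMIDIAccess().then((access) => {
//   for (const output of access.outputs.values()) {
//     output.send(noteOn(0, 60, 100));                       // middle C
//     output.send(noteOff(0, 60), performance.now() + 500);  // 500 ms later
//   }
// });
```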
Download: PDF (HIGH Res) (1.0MB)
Download: PDF (LOW Res) (297KB)
Authors: Sacchetto, Matteo; Gastaldi, Paolo; Chafe, Chris; Rottondi, Cristina; Servetti, Antonio
Affiliation: Department of Electronics and Telecommunications, Politecnico di Torino, Italy; Department of Energy, Politecnico di Torino, Italy; Center for Computer Research in Music and Acoustics, Stanford University, CA; Department of Electronics and Telecommunications, Politecnico di Torino, Italy; Department of Control and Computer Engineering, Politecnico di Torino, Italy
Already widely used, videoconferencing software spread even further with the social distancing measures adopted during the SARS-CoV-2 pandemic. However, none of the Web-based solutions currently available supports high-fidelity stereo audio streaming, a fundamental prerequisite for networked music applications. This is mainly because the WebRTC RTCPeerConnection standard for Web-based audio streaming does not handle uncompressed audio formats. To overcome that limitation, an implementation of 16-bit pulse code modulation (PCM) stereo audio transmission on top of the WebRTC RTCDataChannel, leveraging Web Audio and AudioWorklets, is discussed. Results obtained with multiple configurations, browsers, and operating systems show that the proposed approach outperforms the WebRTC RTCPeerConnection standard in terms of audio quality and latency, which in the authors' best case to date has been reduced to only 40 ms between two MacBooks on a local area network.
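A core step in any such pipeline is quantizing the Float32 samples that Web Audio delivers to an AudioWorklet into 16-bit PCM before sending them over the data channel, and inverting that mapping on the receiving side. The sketch below illustrates that conversion only; it is an assumption-laden simplification, not the authors' implementation.

```javascript
// Convert Float32 samples in [-1, 1] to 16-bit PCM (the wire format).
function floatTo16BitPCM(samples) {
  const pcm = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i])); // clamp to valid range
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;        // scale to int16 range
  }
  return pcm;
}

// Receiving side: map int16 samples back to floats before playback.
function pcm16ToFloat(pcm) {
  const out = new Float32Array(pcm.length);
  for (let i = 0; i < pcm.length; i++) {
    out[i] = pcm[i] < 0 ? pcm[i] / 0x8000 : pcm[i] / 0x7fff;
  }
  return out;
}
```

In a real deployment the Int16Array's underlying buffer would be passed to `RTCDataChannel.send()`, which accepts binary payloads directly.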
Download: PDF (HIGH Res) (3.9MB)
Download: PDF (LOW Res) (845KB)
Authors: Ren, Shihong; Pottier, Laurent; Buffa, Michel; Yu, Yang
Affiliation: Shanghai Conservatory of Music, SKLMA, China; Université Jean Monnet, ECLLA Lab, Saint-Étienne, France; Université Jean Monnet, ECLLA Lab, Saint-Étienne, France; Université Côte d'Azur, I3S, INRIA, France; Shanghai Conservatory of Music, SKLMA, China
Download: PDF (HIGH Res) (12.1MB)
Download: PDF (LOW Res) (1.2MB)
Authors: Lindetorp, Hans; Falkenberg, Kjetil
Affiliation: Department of Music Production, Royal College of Music, Stockholm, Sweden; Sound and Music Computing Group, KTH Royal Institute of Technology, Stockholm, Sweden; Sound and Music Computing Group, KTH Royal Institute of Technology, Stockholm, Sweden
Web Audio has great potential for interactive audio content, and its open standard and easy integration with other Web-based tools make it particularly interesting. Earlier studies identified obstacles that kept students from materializing creative ideas through programming: focus shifted from artistic ambition to solving technical issues. This study builds upon 20 years of experience in teaching sound and music computing and evaluates how Web Audio contributes to the learning experience. Data was collected from different student projects through analysis of source code, reflective texts, group discussions, and online self-evaluation forms. The results indicate that Web Audio serves well as a learning platform and that an XML abstraction of the API helped the students stay focused on the artistic output. It is also concluded that an online tool can reduce the time for getting started with Web Audio to less than one hour. Although many obstacles have been successfully removed, the authors argue that there is still great potential for new online tools targeting audio application development, in which accessibility and sharing features contribute to an even better learning experience.
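The idea behind an XML abstraction of the API is to express an audio graph declaratively, with nested elements standing for connected Web Audio nodes, so that students describe routing instead of writing setup code. The fragment below is a hypothetical illustration of that idea only; element and attribute names mirror Web Audio node types and parameters but are not claimed to match the syntax of the tool evaluated in the paper.

```xml
<!-- Hypothetical sketch: an oscillator routed through a gain node.
     Nesting stands for audio-graph connections; attributes set AudioParams. -->
<audio>
  <oscillatornode type="sine" frequency="440">
    <gainnode gain="0.5"></gainnode>
  </oscillatornode>
</audio>
```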
Download: PDF (HIGH Res) (4.4MB)
Download: PDF (LOW Res) (364KB)
Authors: Fyfe, Lawrence; Bedoya, Daniel; Chew, Elaine
Affiliation: STMS Laboratoire (UMR9912) – CNRS, IRCAM, Sorbonne Université, Ministère de la Culture, Paris 75004, France; STMS Laboratoire (UMR9912) – CNRS, IRCAM, Sorbonne Université, Ministère de la Culture, Paris 75004, France; Department of Engineering, King's College London, London WC2R 2LS, United Kingdom
Advancing knowledge and understanding about performed music is hampered by a lack of annotation data for music expressivity. To enable large-scale collection of annotations and explorations of performed music, the authors have created a workflow enabled by CosmoNote, a Web-based citizen science tool for annotating musical structures created by the performer and experienced by the listener during expressive piano performances. To support annotation tasks, CosmoNote lets annotators listen to the recorded performances and view synchronized music visualization layers including the audio waveform, recorded notes, extracted audio features such as loudness and tempo, and score features such as harmonic tension. Annotators can zoom into specific parts of a performance, see the corresponding visuals, and listen to the audio from just that part. Performed musical structures are annotated using boundaries of varying strengths, regions, comments, and note groups. By analyzing the annotations collected with CosmoNote, the authors will be able to model and analyze performance decisions, aiding the understanding of expressive choices in musical performances and the discovery of the vocabulary of performed musical structures.
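The four annotation types the abstract names (boundaries with strengths, regions, comments, note groups) suggest a simple per-performance data model. The sketch below is a plausible in-memory shape for such annotations, not CosmoNote's actual data model; the field names and the 1-4 strength scale are assumptions made for illustration.

```javascript
// Hypothetical container for one performance's annotations.
function makeAnnotationSet(performanceId) {
  return {
    performanceId,
    boundaries: [], // { time: seconds, strength: 1..4 } — assumed scale
    regions: [],    // { start, end, label }
    comments: [],   // { time, text }
    noteGroups: [], // arrays of note ids that belong together
  };
}

// Insert a boundary and keep the list in chronological order.
function addBoundary(set, time, strength) {
  if (strength < 1 || strength > 4) {
    throw new RangeError("strength must be between 1 and 4");
  }
  set.boundaries.push({ time, strength });
  set.boundaries.sort((a, b) => a.time - b.time);
  return set;
}
```

Keeping boundaries sorted by time makes downstream analyses, such as comparing boundary placements across annotators, a straightforward sweep over the list.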
Download: PDF (HIGH Res) (10.7MB)
Download: PDF (LOW Res) (1.5MB)
Authors: Cámara, Mateo; Blanco, José Luis
Affiliation: Grupo de Aplicaciones del Procesado de Señales, Information Processing and Telecommunication Center, Escuela Técnica Superior de Ingenieros de Telecomunicación, Universidad Politécnica de Madrid, Spain
Download: PDF (HIGH Res) (2.5MB)
Download: PDF (LOW Res) (440KB)