New high-capacity optical discs and high-bandwidth networks provide the capability for delivering multichannel audio. Although many one- and two-channel recordings exist, only a handful of multichannel recordings do. In this paper we propose a neural-network approach that can synthesize microphone signals with the correct acoustical characteristics of specific venues that have been characterized in advance. These signals can then be used to generate a multichannel recording with the acoustical characteristics of the original venue. The complex semi-cepstrum technique is employed to extract features from musical signals recorded in a venue, and these features are fed into a fuzzy cerebellar model articulation controller (FCMAC) for training.
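As a rough sketch of the feature-extraction step, and assuming the abstract's "complex semi-cepstrum" is a variant of the standard complex cepstrum (the paper's exact formulation may differ), the cepstrum of a windowed audio frame can be computed as the inverse FFT of the log spectrum with unwrapped phase:

```python
import numpy as np

def complex_cepstrum(frame):
    """Complex cepstrum of a frame: inverse FFT of the log-magnitude
    spectrum plus j times the unwrapped (detrended) phase."""
    spectrum = np.fft.fft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # small offset avoids log(0)
    phase = np.unwrap(np.angle(spectrum))
    # remove the linear phase trend so the cepstrum is well behaved
    n = np.arange(len(frame))
    phase = phase - (phase[-1] / (len(frame) - 1)) * n
    return np.real(np.fft.ifft(log_mag + 1j * phase))

# toy example (hypothetical signal, not from the paper):
# a 1024-sample windowed frame of a decaying 440 Hz sinusoid
fs = 8000
t = np.arange(1024) / fs
frame = np.exp(-5 * t) * np.sin(2 * np.pi * 440 * t) * np.hanning(1024)
cc = complex_cepstrum(frame)
```

The low-order cepstral coefficients summarize the spectral envelope of the frame, which is the kind of compact feature vector a network such as an FCMAC could be trained on; how the paper maps these features to synthesized microphone signals is not detailed in the abstract.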
https://www.aes.org/e-lib/browse.cfm?elib=11258