In augmented-reality (AR) applications, reproducing acoustic reverberation is essential to an immersive audio experience: the audio components of an AR system should simulate the acoustics of the environment the user is experiencing. In virtual-reality (VR) applications, sound engineers can program all reverberation parameters for a scene in advance, or for a user at a fixed position. In AR, however, conventional procedures break down because the range of possible environments is unbounded, so the parameters cannot be programmed ahead of time; the reverberation characteristics must instead be estimated dynamically from the environments in which users move. Motivated by the observation that skilled acoustic engineers can estimate reverberation parameters from images of a room without performing any measurements, we trained convolutional neural networks to estimate these parameters from two-dimensional images. The proposed method requires neither simulation of sound propagation nor 3D reconstruction techniques.
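The core idea above, regressing reverberation parameters directly from a 2-D image with a convolutional network, might be sketched as follows. This is a minimal illustrative model, not the paper's actual architecture: the layer sizes, the PyTorch implementation, and the choice of RT60 (reverberation time) as the regression target are all assumptions.

```python
# Hypothetical sketch: a small CNN that regresses reverberation
# parameters (e.g. RT60, in seconds) from a 2-D room image.
# Assumes PyTorch; this is NOT the paper's actual model.
import torch
import torch.nn as nn

class ReverbParamCNN(nn.Module):
    def __init__(self, n_params: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pool -> (B, 32, 1, 1)
        )
        # Linear head regresses the requested reverberation parameters.
        self.head = nn.Linear(32, n_params)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = ReverbParamCNN(n_params=1)
img = torch.rand(1, 3, 128, 128)   # one RGB room image (untrained demo input)
rt60 = model(img)                  # shape (1, 1): predicted RT60
```

In practice such a model would be trained with a regression loss (e.g. MSE) against measured reverberation parameters; the adaptive pooling keeps the network agnostic to input image resolution.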