This paper investigates an approach to synthesizing the sounds of rigid-body interactions using linear modal synthesis (LMS). We propose a technique that extracts features from a single recorded audio clip to estimate perceptually satisfactory material parameters of virtual objects for real-time sound rendering. The salient features of the recording are extracted from a power spectrogram computed via short-time Fourier transform (STFT) analysis with a suitable window function. From these reference features, intrinsic material parameters are estimated for interactive virtual objects in graphical environments. A tetrahedral finite element method (FEM) model is used to perform the eigenvalue decomposition required by modal analysis. Residual compensation is also implemented to reduce the perceptual difference between the synthesized and recorded sounds and to restore the non-harmonic components missing from the modal model, yielding perceptually high-quality audio. Furthermore, the parameters estimated for an object of one geometry can be transferred to objects of the same material with different geometries and shapes, with the synthesized sound varying accordingly as the shape changes. We present the estimated parameters along with a comparison of recorded and synthesized sounds. Potential applications of our methodology include real-time synthesis of contact-sound events for games and interactive virtual animations, as well as extended authoring capabilities.
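As background for the abstract above, the core of linear modal synthesis can be sketched as a sum of exponentially damped sinusoids, where each mode's frequency, damping, and amplitude would in practice come from the FEM eigenvalue decomposition and the estimated material parameters. The function and parameter values below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def synthesize_modes(freqs, dampings, amps, duration=1.0, sr=44100):
    """Linear modal synthesis: a sum of exponentially damped sinusoids.

    freqs    -- modal frequencies in Hz
    dampings -- damping coefficients in 1/s (larger values decay faster)
    amps     -- initial amplitude of each mode
    """
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, dampings, amps):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out

# Hypothetical modal parameters, loosely resembling a struck metallic object.
signal = synthesize_modes(freqs=[440.0, 1220.0, 2750.0],
                          dampings=[6.0, 9.0, 14.0],
                          amps=[1.0, 0.5, 0.25])
```

In the paper's pipeline, a residual (the recorded sound minus this modal reconstruction) would additionally be compensated for, since purely modal output lacks the non-harmonic components of real contact sounds.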