Listening tests are regarded as the “gold standard” for evaluating the perceptual quality of audio systems. With the surge of applications in virtual and augmented reality, the demand for audio quality evaluations that are more efficient than listening tests has greatly increased. Auditory models are an attractive tool for this purpose and can greatly complement listening tests. A machine-learning-based model for predicting timbral, spatial, and overall audio quality is presented. When both timbral and spatial attributes are considered, existing models (e.g., MoBi-Q) often assume minimal interaction between the two attributes and combine their respective quality predictions into a single overall quality judgement. To test this assumption, a listening test with various timbral and spatial distortions was conducted. Results revealed a strong correlation between the two quality attributes when moderate distortion is present. Based on this observation, the proposed model preserves the original front-end of MoBi-Q for feature extraction and uses a simple neural network as the decision module, which independently maps auditory features to timbral, spatial, and overall quality scores with no explicit assumptions. On available third-party datasets, the proposed model showed a significantly higher correlation with subjective scores than MoBi-Q for timbral and overall quality. The assessment of spatial audio quality is still ongoing.
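The decision module described above could be sketched as a small feed-forward network that maps a front-end feature vector to three quality scores. The layer sizes, activation function, and random weights below are illustrative assumptions for exposition only, not the architecture reported in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def decision_module(features, w1, b1, w2, b2):
    """Map an auditory feature vector to (timbral, spatial, overall)
    quality scores via one hidden layer. Hypothetical sketch; the
    actual network topology is not specified in the abstract."""
    hidden = np.tanh(features @ w1 + b1)  # nonlinear hidden layer
    return hidden @ w2 + b2               # three independent score outputs

# Illustrative dimensions: 16 front-end features, 8 hidden units.
n_features, n_hidden = 16, 8
w1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(scale=0.1, size=(n_hidden, 3))
b2 = np.zeros(3)

# Stand-in for the MoBi-Q front-end output on one audio excerpt.
features = rng.normal(size=n_features)
scores = decision_module(features, w1, b1, w2, b2)
print(scores.shape)  # one score each for timbral, spatial, overall
```

The point of the sketch is that each output is produced directly from the shared features, so no fixed combination rule (e.g., a weighted sum of timbral and spatial scores) is imposed on the overall quality prediction.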