Automating Mixing of User-Generated Audio Recordings from the Same Event
Citation & Abstract
N. Stefanakis, Y. Mastorakis, A. Alexandridis, and A. Mouchtaris, "Automating Mixing of User-Generated Audio Recordings from the Same Event," J. Audio Eng. Soc., vol. 67, no. 4, pp. 201-212 (2019 April). doi: https://doi.org/10.17743/jaes.2019.0008
Abstract: When users attend the same public event, there may be multiple audiovisual recordings that are then posted on social media and websites. The availability of such a massive amount of user-generated recordings (UGRs) has triggered new research directions related to the search, organization, and management of this content, and it has provided inspiration for new business models for content storage, retrieval, and consumption. The authors propose an approach to combine the available recordings based on a normalization step and a mixing step. The normalization step defines a time-invariant gain that is specific to each UGR. In the mixing step, a mechanism is employed that reduces the master gain in accordance with the number of activated inputs at each time instant. An approach called orthogonal mixing is presented, which is based on the assumption that the mixture components are mutually independent. The presented mixing process allows multiple short-duration UGRs to be combined into a longer audio stream with potentially better quality than any one of its constituent parts. This property is exploited in the design of an automatic mixing process that takes advantage of all the available audio recordings at each moment.
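As a rough illustration of the two steps described in the abstract, the Python sketch below applies one time-invariant normalization gain per recording and then mixes the inputs with a master gain of 1/sqrt(N_active), which is the scaling suggested when the active components are mutually independent ("orthogonal"). The function names, the RMS-based normalization target, the silence-based activation test, and the frame length are all illustrative assumptions, not the authors' exact method.

```python
import numpy as np


def normalization_gains(recordings, target_rms=0.1):
    """Normalization step (sketch): one time-invariant gain per UGR,
    chosen here so that every input reaches a common RMS level.
    The common-RMS criterion is an assumption made for illustration."""
    gains = []
    for x in recordings:
        rms = np.sqrt(np.mean(np.square(x))) + 1e-12  # avoid division by zero
        gains.append(target_rms / rms)
    return gains


def orthogonal_mix(recordings, gains, frame_len=1024):
    """Mixing step (sketch): sum the gain-normalized inputs frame by frame
    and scale the master gain by 1/sqrt(N_active). If the active inputs are
    mutually independent, their powers add, so this scaling keeps the output
    power roughly constant as inputs switch on and off."""
    total_len = max(len(x) for x in recordings)
    mix = np.zeros(total_len)
    for start in range(0, total_len, frame_len):
        stop = min(start + frame_len, total_len)
        active_frames = []
        for x, g in zip(recordings, gains):
            seg = x[start:min(stop, len(x))]
            if seg.size and np.any(seg):  # non-silent segment counts as an "activated" input
                padded = np.pad(seg, (0, (stop - start) - seg.size))
                active_frames.append(g * padded)
        if active_frames:
            mix[start:stop] = np.sum(active_frames, axis=0) / np.sqrt(len(active_frames))
    return mix
```

A typical call would be `mix = orthogonal_mix(recordings, normalization_gains(recordings))`, where `recordings` is a list of 1-D NumPy arrays at a common sample rate. The 1/sqrt(N) factor plays the role of the master-gain reduction mechanism mentioned in the abstract; the activation detection used here is a placeholder.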
@article{stefanakis2019automating,
  author={Stefanakis, Nikolaos and Mastorakis, Yannis and Alexandridis, Anastasios and Mouchtaris, Athanasios},
  journal={Journal of the Audio Engineering Society},
  title={Automating Mixing of User-Generated Audio Recordings from the Same Event},
  year={2019},
  volume={67},
  number={4},
  pages={201-212},
  doi={10.17743/jaes.2019.0008},
  month={April},
}
TY - JOUR
TI - Automating Mixing of User-Generated Audio Recordings from the Same Event
SP - 201
EP - 212
AU - Stefanakis, Nikolaos
AU - Mastorakis, Yannis
AU - Alexandridis, Anastasios
AU - Mouchtaris, Athanasios
PY - 2019
JO - Journal of the Audio Engineering Society
IS - 4
VL - 67
Y1 - April 2019
DO - 10.17743/jaes.2019.0008
ER -
Authors:
Stefanakis, Nikolaos; Mastorakis, Yannis; Alexandridis, Anastasios; Mouchtaris, Athanasios
Affiliations:
Foundation for Research and Technology-Hellas, Institute of Computer Science, Heraklion, Greece; Technological Educational Institute of Crete, Department of Music Technology and Acoustics Engineering, Rethymno, Greece; University of Crete, Department of Computer Science, Heraklion, Greece (see document for exact affiliation information)
JAES Volume 67, Issue 4, pp. 201-212; April 2019
Publication Date:
April 5, 2019
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=20452