W. Fohl and D. Hemmer, "An FPGA-Based Virtual Reality Audio System," presented at the 138th Convention of the Audio Engineering Society, Convention Paper 9328 (2015 May).
Abstract: A distributed system for mobile virtual reality audio is presented. The system consists of an audio server running on a PC or Mac, a remote control app for an iOS6 device, and the mobile renderer running on a system-on-chip (SoC) with a CPU core and signal processing hardware. The server communicates with the renderer via WLAN. It sends audio streams over a self-defined lightweight protocol and exchanges status and control data as OSC (Open Sound Control) messages. On the mobile renderer, HRTF filters are applied to each audio signal according to the relative positions of the source and the listener’s head. The complete audio signal processing chain has been designed in Simulink. The VHDL code for the SoC’s FPGA hardware has been automatically generated by Xilinx’s System Generator. The system is capable of rendering up to eight independent virtual sources.
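The abstract describes two pieces that can be sketched compactly: an OSC control channel between the server and the mobile renderer, and per-source HRTF filtering driven by the source position relative to the listener's head. The Python sketches below illustrate those ideas conceptually only; they are not the paper's Simulink/System Generator implementation, and the IP address, port, OSC address patterns, HRTF database layout, and 512-tap filter length are assumptions made for illustration.

# Hedged sketch of the control channel: the paper states that status and control
# data are exchanged as OSC messages over WLAN; the address patterns and port
# below are illustrative, not the paper's actual protocol. Uses the python-osc package.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.0.42", 9000)  # renderer's WLAN address (example values)
# Listener head orientation (yaw, pitch, roll in degrees) -- hypothetical message.
client.send_message("/listener/orientation", [30.0, 0.0, 0.0])
# Position of virtual source 1 (x, y, z in meters) -- hypothetical message.
client.send_message("/source/1/position", [1.0, 2.0, 0.0])

# Conceptual sketch of binaural rendering: each source's mono signal is convolved
# with the left/right HRTF impulse responses chosen for its direction relative to
# the listener's head, and up to eight rendered sources are summed.
import numpy as np

def render_source(mono, azimuth_deg, hrtf_db):
    """Return an (N, 2) stereo signal for one virtual source.

    hrtf_db: dict mapping azimuth (degrees) to a (taps, 2) array holding the
    left/right head-related impulse responses (hypothetical layout).
    """
    nearest = min(hrtf_db, key=lambda az: abs(az - azimuth_deg))  # closest measured direction
    hrir = hrtf_db[nearest]
    left = np.convolve(mono, hrir[:, 0])   # FIR filtering, left ear
    right = np.convolve(mono, hrir[:, 1])  # FIR filtering, right ear
    return np.stack([left, right], axis=1)

def mix_sources(sources, hrtf_db):
    """Sum up to eight independently rendered virtual sources."""
    rendered = [render_source(sig, az, hrtf_db) for sig, az in sources[:8]]
    n = max(r.shape[0] for r in rendered)
    out = np.zeros((n, 2))
    for r in rendered:
        out[:r.shape[0]] += r
    return out

if __name__ == "__main__":
    fs = 48000
    rng = np.random.default_rng(0)
    # Synthetic stand-in data: noise "HRIRs" on a 15-degree grid, one 1 kHz test tone.
    hrtf_db = {az: rng.standard_normal((512, 2)) * 0.01 for az in range(-180, 181, 15)}
    tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
    print(mix_sources([(tone, 30.0)], hrtf_db).shape)  # -> (48511, 2)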
@inproceedings{fohl2015an,
  author    = {Fohl, Wolfgang and Hemmer, David},
  title     = {An {FPGA}-Based Virtual Reality Audio System},
  booktitle = {Audio Engineering Society Convention 138},
  number    = {9328},
  month     = {May},
  year      = {2015},
  abstract  = {A distributed system for mobile virtual reality audio is presented. The system consists of an audio server running on a PC or Mac, a remote control app for an iOS6 device, and the mobile renderer running on a system-on-chip (SoC) with a CPU core and signal processing hardware. The server communicates with the renderer via WLAN. It sends audio streams over a self-defined lightweight protocol and exchanges status and control data as OSC (Open Sound Control) messages. On the mobile renderer, HRTF filters are applied to each audio signal according to the relative positions of the source and the listener’s head. The complete audio signal processing chain has been designed in Simulink. The VHDL code for the SoC’s FPGA hardware has been automatically generated by Xilinx’s System Generator. The system is capable of rendering up to eight independent virtual sources.},
}
TY - CONF
TI - An FPGA-Based Virtual Reality Audio System
AU - Fohl, Wolfgang
AU - Hemmer, David
T2 - Audio Engineering Society Convention 138
PY - 2015
Y1 - 2015/05
UR - http://www.aes.org/e-lib/browse.cfm?elib=17752
AB - A distributed system for mobile virtual reality audio is presented. The system consists of an audio server running on a PC or Mac, a remote control app for an iOS6 device, and the mobile renderer running on a system-on-chip (SoC) with a CPU core and signal processing hardware. The server communicates with the renderer via WLAN. It sends audio streams over a self-defined lightweight protocol and exchanges status and control data as OSC (Open Sound Control) messages. On the mobile renderer, HRTF filters are applied to each audio signal according to the relative positions of the source and the listener’s head. The complete audio signal processing chain has been designed in Simulink. The VHDL code for the SoC’s FPGA hardware has been automatically generated by Xilinx’s System Generator. The system is capable of rendering up to eight independent virtual sources.
ER -
Authors:
Fohl, Wolfgang; Hemmer, David
Affiliation:
Hamburg University of Applied Sciences, Hamburg, Germany
AES Convention:
138 (May 2015)
Paper Number:
9328
Publication Date:
May 6, 2015
Subject:
Audio Signal Processing
Permalink:
http://www.aes.org/e-lib/browse.cfm?elib=17752