AES 117th Convention Program


Sunday, October 31, 1:30 pm – 3:30 pm
Session S: AUDIO-VIDEO SYSTEMS

Chair: Steve Lyman,
Dolby Laboratories, San Francisco, CA, USA

1:30 pm
S-1
Sound Editing Workflows and Technologies for Digital Film—The Nonlinear Soundtrack
John McKay, Virtual Katy, Wellington, New Zealand
Advances in nonlinear editing technology allow directors to modify a film at any point during postproduction. This freedom provides significant creative flexibility, but because the technologies for sound and picture editing are not fully integrated, it is a challenge for sound editors to keep the soundtrack in sync with picture edits and changes. This paper introduces new workflows and technologies that let sound editors work in tandem with the changing picture and automate manual processes in a collaborative nonlinear environment. These workflows and technologies are described using a real-world motion picture case study: The Lord of the Rings.
Convention Paper 6318
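
As an illustration of the kind of change-list conform the paper addresses, the sketch below ripples audio regions past a picture change. The data structure and function are hypothetical stand-ins for illustration, not Virtual Katy's actual workflow or API:

    # Hypothetical sketch of conforming audio regions to a picture change.
    # A change inserts (positive) or deletes (negative) frames at a point.

    def conform(regions, change_frame, delta_frames):
        """Shift audio regions that start at or after a picture change.

        regions: list of (start_frame, length_frames, name) tuples
        change_frame: frame index where footage was inserted or removed
        delta_frames: +n for an insertion, -n for a deletion
        """
        conformed = []
        for start, length, name in regions:
            if start >= change_frame:
                start += delta_frames  # region is downstream: ripple it
            conformed.append((start, length, name))
        return conformed

    # Example: 48 frames (2 s at 24 fps) inserted at frame 1000 ripple
    # every later cue while earlier cues stay put.
    cues = [(500, 240, "ambience"), (1200, 96, "sword clash")]
    print(conform(cues, change_frame=1000, delta_frames=48))
    # -> [(500, 240, 'ambience'), (1248, 96, 'sword clash')]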

2:00 pm
S-2
5.1 Surround Sound Productions with Multiformat HDTV Programs
Kazutugu Uchimura, Hiroshi Kouchi, Shinichiro Ogata, NHK Broadcasting Center, Tokyo, Japan
International co-production of HD (high-definition) programs raises problems in postproduction that originate in the frame-rate relationship between 24p and 23.976p. Shooting at 23.976p is generally used to ensure compatibility with TV systems such as NTSC, but problems occur when transferring to the PAL system or to film. For a co-production with China entitled “The Ancient Routes of Tea & Horses,” we produced 5.1 surround sound compatible with both 24p and 59.94i HD images. This paper describes the production techniques, the problems encountered, and some future challenges. We believe these techniques will be useful for future media combinations.
Convention Paper 6319
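
The 24p/23.976p relationship behind these problems is the NTSC 1000/1001 pull-down ratio. The short calculation below, ours rather than the paper's, shows why unconverted material drifts out of sync:

    # 23.976p is really 24 * (1000/1001) fps, the NTSC-compatible rate.
    from fractions import Fraction

    film = Fraction(24)
    ntsc = Fraction(24) * Fraction(1000, 1001)   # ~23.976 fps

    # Playing 23.976p material at true 24p runs 0.1% fast:
    speedup = film / ntsc                        # = 1001/1000
    print(float(speedup))                        # 1.001

    # Over a one-hour program, the accumulated picture/sound offset is:
    drift = 3600 * (1 - ntsc / film)             # seconds of slippage
    print(float(drift))                          # ~3.596 s per hour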

2:30 pm
S-3
PC-Based Sound Reproduction System Linked to Virtual Environments Rendered by VRML
Kentaro Matsui, Hiroyuki Okubo, Setsu Komiyama, NHK Science & Technical Research Laboratories, Tokyo, Japan
A PC-based sound reproduction system, called PC-VRAS control, has been developed that links to virtual environments rendered in the Virtual Reality Modeling Language (VRML) and provides three-dimensional sound spatially synchronized with the VRML scene. A listener can explore the VRML scene at will; the surrounding sound is resynthesized in real time with each step the listener takes. The control is built on ActiveX technologies and runs in an Internet Explorer browser window. All processing is done in software on a standard personal computer, so no special hardware is needed.
Convention Paper 6320
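
The paper gives no implementation details, but the per-step resynthesis can be pictured with a minimal sketch that recomputes a source's direction and distance-attenuated gain whenever the listener moves. All names and the attenuation law here are illustrative assumptions, not the authors' method:

    import math

    def source_relative_to_listener(src, listener_pos, listener_yaw):
        """Return (azimuth_deg, gain) of a sound source as heard by a
        listener at listener_pos facing listener_yaw radians. Hypothetical
        stand-in for the system's per-step resynthesis."""
        dx = src[0] - listener_pos[0]
        dz = src[1] - listener_pos[1]
        dist = math.hypot(dx, dz)
        # Azimuth relative to the listener's facing direction:
        azimuth = math.degrees(math.atan2(dx, dz) - listener_yaw)
        gain = 1.0 / max(dist, 1.0)   # simple inverse-distance attenuation
        return azimuth % 360.0, gain

    # A listener steps toward a source at (0, 5); it stays dead ahead
    # (azimuth 0) while its gain rises from 0.2 to 0.5.
    print(source_relative_to_listener((0.0, 5.0), (0.0, 0.0), 0.0))
    print(source_relative_to_listener((0.0, 5.0), (0.0, 3.0), 0.0))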

3:00 pm
S-4
A Headphone-Free Head-Tracked Audio Telepresence System
Norman Jouppi, Subu Iyer, April Slayden, Hewlett-Packard, Palo Alto, CA, USA
We have developed a headphone-free, bidirectional, immersive audio telepresence system. The primary user of the system experiences four-channel audio from a remote location while sitting or standing in a 360-degree surround projection display cube. The display cube incorporates numerous acoustic enhancements, including tilted screens, an anechoic ceiling, and speakers ported through slits in the cube's edges. Head tracking based on near-infrared video obtains both the user's head position and orientation, so users can vary the orientation of their projected voice at the remote location merely by rotating their heads. Similarly, the arrival time and volume of the sound channels transmitted from the remote location are varied automatically in the display cube based on the position of the user's head, to maintain proper perceived interaural time and level differences between channels.
Convention Paper 6321
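
The interaural time differences mentioned above follow from simple geometry. A back-of-the-envelope sketch, assuming a free field, Woodworth's spherical-head model, a 0.09 m ear-to-center distance, and 343 m/s speed of sound (our assumptions, not figures from the paper):

    import math

    SPEED_OF_SOUND = 343.0   # m/s
    HALF_HEAD = 0.09         # m, approximate ear-to-center distance

    def interaural_time_difference(azimuth_deg):
        """Woodworth's spherical-head approximation of ITD (seconds) for
        a distant source at the given azimuth (0 = straight ahead)."""
        theta = math.radians(azimuth_deg)
        return (HALF_HEAD / SPEED_OF_SOUND) * (theta + math.sin(theta))

    # A source 45 degrees to one side arrives ~0.39 ms earlier at the
    # near ear; adjusting channel delays as the head moves preserves
    # this cue.
    print(interaural_time_difference(45.0) * 1000, "ms")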




© 2004, Audio Engineering Society, Inc.