Mr. Zurek started by tracing the history of video conferencing. The first practical systems, using satellite transmission, were developed in the 1950s and first became publicly available in the 1970s. By 2000, MPEG video streaming was possible on a wireless phone.
Another technology important to immersive systems was the IPIX Omniview camera, developed in 1991. This camera has a stationary fisheye lens that observes a 360-degree by 185-degree hemisphere. The full view is transmitted, and a subset of the captured image is displayed at the receiving end. The displayed image can be panned and zoomed by directing a virtual camera. The displayed image is flattened in software, yielding a picture with little distortion near its center.
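The flattening step can be sketched as an inverse mapping: for each pixel of the desired flat view, compute the ray of the virtual camera and look that ray up in the fisheye image. The sketch below assumes an equidistant fisheye projection (r = f·θ); the function names and parameters are illustrative, not IPIX's actual algorithm.

```python
import numpy as np

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def fisheye_coords(pan, tilt, zoom, out_shape, fish_shape, fov=np.pi):
    """For each pixel of a flat virtual-camera view, return the (col, row)
    coordinates to sample from an equidistant fisheye image (r = f * theta)."""
    out_h, out_w = out_shape
    fish_h, fish_w = fish_shape
    f = zoom * out_w / 2.0                      # virtual perspective focal length
    xs = np.arange(out_w) - (out_w - 1) / 2.0
    ys = np.arange(out_h) - (out_h - 1) / 2.0
    x, y = np.meshgrid(xs, ys)
    # Unit ray direction for each output pixel in camera coordinates.
    rays = np.stack([x, y, np.full_like(x, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    # Aim the virtual camera: pan about the vertical axis, tilt about horizontal.
    R = rot_y(pan) @ rot_x(tilt)
    rays = rays @ R.T
    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))  # angle off the lens axis
    phi = np.arctan2(rays[..., 1], rays[..., 0])         # azimuth around the axis
    r = theta / (fov / 2.0) * (min(fish_w, fish_h) / 2.0)
    u = fish_w / 2.0 + r * np.cos(phi)                   # fisheye column
    v = fish_h / 2.0 + r * np.sin(phi)                   # fisheye row
    return u, v
```

Sampling the fisheye frame at (u, v) then produces the flattened view; distortion grows toward the edges of the virtual window, matching the observation that the center of the picture is cleanest.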
Zurek developed a similar system for the audio signal. A circular or hemispherical array of microphones captures and transmits the complete sound field. The observer can pan and zoom the audio signal much as the picture is controlled.
To simplify system development, Zurek used an array of miniature cardioid microphones and limited the bandwidth to that of traditional telephone systems. This allowed him to use a fairly large spacing between the microphones and to fit the camera into the center of the microphone array.
The array was steered primarily by using the microphone closest to the direction of interest and the opposing microphone. By adding or subtracting the signals from this opposing pair, he was able to obtain the full range of first-order directional patterns, from omnidirectional to super-cardioid. For finer steering control, Zurek used a weighted sum of the microphone pairs that straddled the direction of interest.
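The pattern synthesis can be sketched numerically. Every first-order pattern has the form a + b·cos(θ), and weighting an opposing cardioid pair reproduces exactly that family. This is a minimal sketch; the specific weight values shown are textbook figures, not numbers from the talk.

```python
import numpy as np

def pair_pattern(theta, w_front, w_back):
    """Directional response from weighting an opposing cardioid pair.
    The result is always first order: a + b*cos(theta), with
    a = (w_front + w_back)/2 and b = (w_front - w_back)/2."""
    front = 0.5 * (1 + np.cos(theta))   # cardioid aimed at theta = 0
    back = 0.5 * (1 - np.cos(theta))    # cardioid aimed the opposite way
    return w_front * front + w_back * back

theta = np.linspace(0, 2 * np.pi, 361)
omni = pair_pattern(theta, 1.0, 1.0)        # equal sum -> omnidirectional
dipole = pair_pattern(theta, 1.0, -1.0)     # difference -> figure-eight
cardioid = pair_pattern(theta, 1.0, 0.0)    # front microphone alone
# Super-cardioid is roughly 0.37 + 0.63*cos(theta), i.e. weights (a+b, a-b).
supercard = pair_pattern(theta, 1.0, -0.268)
```

Finer steering between microphone positions then amounts to cross-fading these weights across the two pairs that straddle the target direction.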
Zurek then described automatic beam steering systems he developed. Typically the computer steers the beam toward the loudest sound. By using non-linear processing, the beam steering signal can locate the sound source more precisely.
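Steering toward the loudest sound can be sketched as scanning candidate directions, forming a beam in each, and picking the one with the most output energy. The delay-and-sum beamformer, circular geometry, and nearest-sample delays below are illustrative simplifications, not Zurek's implementation.

```python
import numpy as np

def loudest_direction(mic_signals, mic_angles, candidate_angles,
                      radius=0.1, fs=8000, c=343.0):
    """Return the candidate direction whose delay-and-sum beam carries
    the most energy. mic_signals has shape (n_mics, n_samples)."""
    best_angle, best_energy = None, -np.inf
    for a in candidate_angles:
        # A plane wave from direction a reaches mics facing it earliest;
        # delay each mic so all arrivals line up (nonnegative delays).
        delays = radius * (1 + np.cos(mic_angles - a)) / c
        shifts = np.round(delays * fs).astype(int)   # nearest-sample delays
        beam = np.zeros(mic_signals.shape[1])
        for m in range(mic_signals.shape[0]):
            beam += np.roll(mic_signals[m], shifts[m])
        energy = np.mean(beam ** 2)
        if energy > best_energy:
            best_angle, best_energy = a, energy
    return best_angle
```

The non-linear refinement mentioned in the talk (for example, squaring or thresholding the steering signal before the comparison) would slot in where the energy is computed.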
Zurek described how combined-mode processing can be more effective. A microphone array may first detect the general direction of a sound source; feature identification in the video signal can then track the head and shoulders of the speaker, even through short silent periods.
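One way to sketch such combined-mode tracking is a small fusion routine that prefers the video track when available and holds it through brief silences. The class structure and the `hold_frames` parameter are illustrative assumptions, not the system described in the talk.

```python
class CombinedTracker:
    """Fuse a coarse audio bearing with a video head-and-shoulders track:
    audio gives the general direction, video refines it and carries the
    lock through short silent periods."""

    def __init__(self, hold_frames=30):
        self.hold_frames = hold_frames   # how long to coast with no cues
        self.silent = 0
        self.direction = None

    def update(self, audio_bearing, video_bearing):
        """Bearings are angles, or None when that cue is absent this frame."""
        if video_bearing is not None:
            self.direction = video_bearing   # video refines and holds the lock
            self.silent = 0
        elif audio_bearing is not None:
            self.direction = audio_bearing   # fall back to the coarse audio cue
            self.silent = 0
        else:
            self.silent += 1
            if self.silent > self.hold_frames:
                self.direction = None        # cue-free for too long: drop track
        return self.direction
```

The coasting behavior is what lets the beam stay on a speaker who pauses: the last video bearing is reused until the hold window expires.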
He then demonstrated the operation of the system, showing the direction of the beam steering on a computer display while playing the steered array through headphones. The demonstration system used a circular array of eight microphones and did not have a camera.
A question and answer period covered technical details of the system, and also digressed into a discussion of the seemingly irrational demand for ever more complex features and gimmicks on wireless phones.