Computer Science Colloquium: Ramani Duraiswami, "Spatial Sound"

Monday, October 7, 2013
4:00 p.m.
1115 Computer Science Instructional Center

Department of Computer Science
Fall 2013 Department Colloquium Series

Capturing, Computing, Visualizing and Recreating Spatial Sound

Ramani Duraiswami
Associate Professor

Abstract
The sound field at a point contains information on the spatial origin of the sound, and humans use this information to make sense of their environment. They are able to localize and identify sources, pay attention to one source among many distractors, or enjoy the spatial ambience of a scene. When humans hear sound, that sound is filtered by the complex scattering that occurs as it interacts with the environment and with our bodies. This process endows the received sound with cues that are extracted and decoded by the neural system to perceive the world auditorily in three dimensions.

To capture and reproduce this directional information, a spatial representation of the sound is needed, along with a means to capture and manipulate the sound in that representation. We have explored two classical representations of directional sound from mathematical physics: spherical wave-function expansions and plane-wave expansions. We have developed spherical microphone arrays that allow the captured sound to be represented directly in these bases. Plane-wave beamforming allows the sound field at a point to be visualized as an image, much as a video camera images the light field at a given point. Registering these audio images with visual images enables a new way to perform audio-visual scene analysis. Several examples are presented at http://goo.gl/igflH
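
The idea of forming an acoustic "image" by steering an array over many look directions can be sketched with a much-simplified broadband delay-and-sum beamformer (the spherical-harmonic machinery of the actual work is omitted). This is our illustrative sketch, not the speaker's code: the function names, the free-field single-plane-wave assumption, and the circular test array are all ours.

```python
import numpy as np

C = 343.0  # assumed speed of sound in air, m/s

def delay_and_sum_power(signals, mic_pos, direction, fs, c=C):
    """Steer the array toward unit vector `direction` and return the
    broadband output power of a delay-and-sum beamformer.

    signals : (n_mics, n_samples) time-domain recordings
    mic_pos : (n_mics, 3) microphone positions in metres
    """
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # A plane wave from `direction` reaches mic m early by (p_m . d)/c.
    delays = mic_pos @ direction / c
    # Compensate the arrival-time differences, then sum coherently.
    aligned = spectra * np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    return float(np.sum(np.abs(aligned.sum(axis=0)) ** 2))

def acoustic_image(signals, mic_pos, directions, fs, c=C):
    """Beamformer power over a grid of look directions: one 'pixel' each."""
    return np.array([delay_and_sum_power(signals, mic_pos, d, fs, c)
                     for d in directions])
```

Scanning `acoustic_image` over a dense grid of directions and plotting the powers yields the kind of sound-field image described above; the grid point whose steering delays match the true arrival direction sums coherently and dominates.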

The complex interaction of the incoming sound with the listener's body (especially the head and ears) is captured by the head-related transfer function (HRTF). Because of inter-personal variability in body and ear shape, the HRTF differs from person to person, and it must be characterized and measured to recreate, over headphones, spatial sound scenes that allow perception of the original scene. Understanding, computing, measuring, and exploring the relationship of the HRTF to anthropometry have all been themes of work in my group. An extremely fast HRTF measurement technique, which allows measurement in seconds rather than hours, has been developed. We have developed fast algorithms for computation of the HRTF using a novel version of the fast multipole method, which is now finding application in many other problem domains. We have also developed methods for sound scene reproduction that combine measurement of a scene with reproduction via individualized HRTFs, room modeling, and tracking.
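
At its core, headphone reproduction with an HRTF amounts to filtering a source signal with the left- and right-ear head-related impulse responses (HRIRs) for the source's direction. The sketch below shows only that filtering step, using toy impulse responses that encode just an interaural time and level difference; real individualized HRIRs, and everything else about this example, are assumptions of ours rather than the speaker's implementation.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with the left- and
    right-ear HRIRs measured (or computed) for the desired direction.

    Returns a (2, n) stereo array: row 0 = left ear, row 1 = right ear.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=0)

# Hypothetical toy HRIRs for a source off to the listener's left:
# the left ear hears the sound first and louder, the right ear a few
# samples later and attenuated. Measured HRIRs would replace these.
hrir_l = np.zeros(32); hrir_l[0] = 1.0
hrir_r = np.zeros(32); hrir_r[8] = 0.5
```

In a full system, the HRIR pair would be selected (and interpolated) per source direction from an individualized measured or computed HRTF set, and updated as head tracking reports listener motion.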

Audience: Graduate, Undergraduate, Faculty, Post-Docs, Alumni
