Sound Rendering in Dynamic Environments with Occlusions
Nicholas Tsingos, Jean-Dominique Gascuel
Graphics Interface '97, May 1997, pp. 9--16.
Abstract: With the development of virtual reality systems and multi-modal simulations, soundtrack generation is becoming a significant issue in computer graphics. In the context of computer generated animation, many parameters beyond the object geometry alone, as well as specific events, can be used to generate, control and render a soundtrack that fits the object motions. Producing a convincing soundtrack involves rendering the interactions of sound with the dynamic environment, in particular sound reflections and sound absorption due to partial occlusions, which usually implies an unacceptable computational cost. We present an integrated approach to sound and image rendering in a computer animation context, which allows the animator to recreate the process of sound recording while 'physical effects' are automatically computed. Moreover, our sound rendering process efficiently combines a sound reflection model and an attenuation model due to scattering/diffraction by partial occluders, through the use of graphics hardware, allowing for interactive computation rates.
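The attenuation part of the abstract rests on estimating how much of the path between a sound source and a listener is blocked by partial occluders. As a loose, hypothetical illustration of that occlusion-attenuation idea only (the paper itself rasterizes occluders with graphics hardware; the helper names occlusion_factor and ray_hits_triangle below are not from the paper), a CPU-side sketch might sample rays from the source toward a small disc around the listener and attenuate the amplitude by the blocked fraction:

    # Hypothetical sketch, not the authors' method: estimate an occlusion
    # factor between a sound source and a listener by casting sample rays
    # against partially occluding triangles, then attenuate the amplitude.
    import numpy as np

    def ray_hits_triangle(orig, direction, tri, eps=1e-9):
        """Moller-Trumbore test; True if the segment orig -> orig+direction hits tri."""
        v0, v1, v2 = tri
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:
            return False
        inv_det = 1.0 / det
        s = orig - v0
        u = np.dot(s, p) * inv_det
        if u < 0.0 or u > 1.0:
            return False
        q = np.cross(s, e1)
        v = np.dot(direction, q) * inv_det
        if v < 0.0 or u + v > 1.0:
            return False
        t = np.dot(e2, q) * inv_det
        return 0.0 < t < 1.0  # hit strictly between source and sample point

    def occlusion_factor(source, listener, triangles, radius=0.5, n=256, seed=0):
        """Fraction of rays from the source to a disc around the listener
        that are blocked by occluders (0 = clear path, 1 = fully blocked)."""
        rng = np.random.default_rng(seed)
        axis = listener - source
        axis = axis / np.linalg.norm(axis)
        helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(axis, helper); u /= np.linalg.norm(u)
        v = np.cross(axis, u)
        blocked = 0
        for _ in range(n):
            r = radius * np.sqrt(rng.random())
            theta = 2.0 * np.pi * rng.random()
            target = listener + r * (np.cos(theta) * u + np.sin(theta) * v)
            if any(ray_hits_triangle(source, target - source, tri) for tri in triangles):
                blocked += 1
        return blocked / n

    if __name__ == "__main__":
        source = np.array([0.0, 0.0, 0.0])
        listener = np.array([0.0, 0.0, 4.0])
        # One small square occluder (two triangles) halfway between them.
        quad = [np.array(p) for p in [(-0.15, -0.15, 2.0), (0.15, -0.15, 2.0),
                                      (0.15, 0.15, 2.0), (-0.15, 0.15, 2.0)]]
        occluders = [(quad[0], quad[1], quad[2]), (quad[0], quad[2], quad[3])]
        k = occlusion_factor(source, listener, occluders)
        amplitude = 1.0 * (1.0 - k)  # simple linear attenuation by occlusion
        print(f"occlusion factor = {k:.2f}, attenuated amplitude = {amplitude:.2f}")

In the paper this kind of visibility estimate is obtained far more cheaply by rendering the occluders with graphics hardware, which is what enables the interactive rates the abstract mentions.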
Keyword(s): animation, multi-modal simulation, virtual acoustics
@inproceedings{Tsingos:1997:SRI,
  author    = {Nicholas Tsingos and Jean-Dominique Gascuel},
  title     = {Sound Rendering in Dynamic Environments with Occlusions},
  booktitle = {Graphics Interface '97},
  pages     = {9--16},
  month     = may,
  year      = {1997},
}