Motion-driven concatenative synthesis of cloth sounds
Steven S. An, Doug L. James, Steve Marschner
In ACM Transactions on Graphics, 31(4), July 2012.
Abstract: We present a practical data-driven method for automatically synthesizing plausible soundtracks for physics-based cloth animations running at graphics rates. Given a cloth animation, we analyze the deformations and use motion events to drive crumpling and friction sound models estimated from cloth measurements. We synthesize a low-quality sound signal, which is then used as a target signal for a concatenative sound synthesis (CSS) process. CSS selects, from a database of recorded cloth sounds, a sequence of microsound units (very short segments) that best matches the synthesized target sound in a low-dimensional feature space after a hand-tuned warping function is applied. The selected microsound units are concatenated to produce the final cloth sound with minimal filtering. Our approach avoids expensive physics-based synthesis of cloth sound, instead relying on cloth recordings and our motion-driven CSS approach for realism. We demonstrate its effectiveness on a variety of cloth animations involving various materials and character motions, including first-person virtual clothing with binaural sound.
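For readers unfamiliar with concatenative sound synthesis, the sketch below illustrates the general unit-selection idea described in the abstract: a low-quality target signal is split into short frames, each frame is mapped to a low-dimensional feature vector, and the closest microsound unit from a database of recordings is selected and overlap-added. The unit length, the RMS/spectral-centroid features, and the greedy nearest-neighbor matching are illustrative assumptions, not the paper's actual feature space, warping function, or selection scheme.

```python
# Minimal sketch of concatenative sound synthesis (CSS) driven by a target
# signal. Assumptions for illustration: fixed-length units, RMS + spectral
# centroid features, greedy per-frame nearest-neighbor selection.
import numpy as np

RATE = 44100
UNIT = 1024   # assumed microsound-unit length in samples
HOP = 512     # assumed hop size; units overlap for cross-fading


def features(unit):
    """Map a microsound unit to a low-dimensional feature vector."""
    rms = np.sqrt(np.mean(unit ** 2))
    spec = np.abs(np.fft.rfft(unit * np.hanning(len(unit))))
    freqs = np.fft.rfftfreq(len(unit), 1.0 / RATE)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    return np.array([rms, centroid / RATE])  # crude normalization


def split_units(signal):
    """Slice a signal into overlapping fixed-length units."""
    n = (len(signal) - UNIT) // HOP + 1
    return [signal[i * HOP: i * HOP + UNIT] for i in range(n)]


def css(target, database):
    """Greedy unit selection: for each target frame, pick the database unit
    whose features are closest, then overlap-add the chosen units."""
    db_units = split_units(database)
    db_feats = np.stack([features(u) for u in db_units])
    out = np.zeros(len(target) + UNIT)
    window = np.hanning(UNIT)
    for k, frame in enumerate(split_units(target)):
        dists = np.linalg.norm(db_feats - features(frame), axis=1)
        best = db_units[int(np.argmin(dists))]
        out[k * HOP: k * HOP + UNIT] += window * best  # cross-faded concatenation
    return out[:len(target)]


if __name__ == "__main__":
    # Stand-ins: amplitude-modulated noise for both the recorded database
    # and the synthesized low-quality target signal.
    n_db, n_tgt = 4 * RATE, 2 * RATE
    database = np.random.randn(n_db) * np.abs(np.sin(2 * np.pi * 0.5 * np.arange(n_db) / RATE))
    target = np.random.randn(n_tgt) * np.abs(np.sin(2 * np.pi * 2.0 * np.arange(n_tgt) / RATE))
    print(css(target, database).shape)
```

In the paper's pipeline the target signal itself is produced from motion events (crumpling and friction models), and matching is done after a hand-tuned warping of the feature space; this sketch only shows the generic match-and-concatenate step.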
@article{An:2012:MCS,
author = {Steven S. An and Doug L. James and Steve Marschner},
title = {Motion-driven concatenative synthesis of cloth sounds},
journal = {ACM Transactions on Graphics},
volume = {31},
number = {4},
pages = {102:1--102:10},
month = jul,
year = {2012},
}