Learning silhouette features for control of human motion
Liu Ren, Gregory Shakhnarovich, Jessica K. Hodgins, Hanspeter Pfister, Paul Viola
In ACM Transactions on Graphics, 24(4), October 2005.
Abstract: We present a vision-based performance interface for controlling animated human characters. The system interactively combines information about the user's motion contained in silhouettes from three viewpoints with domain knowledge contained in a motion capture database to produce an animation of high quality. Such an interactive system might be useful for authoring, for teleconferencing, or as a control interface for a character in a game. In our implementation, the user performs in front of three video cameras; the resulting silhouettes are used to estimate his orientation and body configuration based on a set of discriminative local features. Those features are selected by a machine-learning algorithm during a preprocessing step. Sequences of motions that approximate the user's actions are extracted from the motion database and scaled in time to match the speed of the user's motion. We use swing dancing, a complex human motion, to demonstrate the effectiveness of our approach. We compare our results to those obtained with a set of global features, Hu moments, and ground truth measurements from a motion capture system.
Keyword(s): Performance animation, animation interface, computer vision, machine learning, motion capture, motion control
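The abstract compares the paper's learned local silhouette features against Hu moments, a classical set of global shape descriptors. As a point of reference only, the following Python sketch (not code from the paper) shows one common way to compute Hu moments from a binary silhouette mask using OpenCV. The function name, the synthetic ellipse mask, and the log scaling of the moments are illustrative assumptions rather than details taken from the paper.

import cv2
import numpy as np

def silhouette_hu_moments(silhouette: np.ndarray) -> np.ndarray:
    """Return the 7 Hu moment invariants of a binary silhouette.

    `silhouette` is a 2-D uint8 array whose nonzero pixels belong to the
    foreground subject. The signed log transform compresses the moments'
    large dynamic range; it is a common convention, not something
    specified by the paper.
    """
    moments = cv2.moments(silhouette, binaryImage=True)
    hu = cv2.HuMoments(moments).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

if __name__ == "__main__":
    # Hypothetical silhouette: a filled ellipse standing in for a
    # segmented foreground mask from one camera view.
    mask = np.zeros((240, 320), dtype=np.uint8)
    cv2.ellipse(mask, (160, 120), (40, 90), 0, 0, 360, 255, -1)
    print(silhouette_hu_moments(mask))

A global descriptor like this summarizes the whole silhouette in a handful of numbers, which is what the paper's learned local features are evaluated against.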
@article{Ren:2005:LSF,
author = {Liu Ren and Gregory Shakhnarovich and Jessica K. Hodgins and Hanspeter Pfister and Paul Viola},
title = {Learning silhouette features for control of human motion},
journal = {ACM Transactions on Graphics},
volume = {24},
number = {4},
pages = {1303--1331},
month = oct,
year = {2005},
}