Facial retargeting using neural networks
Timothy Costigan, Mukta Prasad, Rachel McDonnell
Motion in Games, November 2014, pp. 31--38.
Abstract: Mapping the motion of an actor's face to a virtual model is a difficult but important problem, especially as fully animated characters become more common in games and movies. Many methods have been proposed, but most require the source and target to be structurally similar. Optical motion capture markers and blendshape weights are one example of a topologically incongruous source and target with no simple mapping between them. In this paper, we present a system that learns this mapping from a small training dataset via supervised learning. Radial Basis Function Networks (RBFNs) have previously been used to retarget markers to blendshape weights, but to our knowledge Multi-Layer Perceptron Artificial Neural Networks (referred to as ANNs) have not been employed in this way. We hypothesized that ANNs would yield a superior retargeting solution to the RBFN, owing to their theoretically greater representational power. We implemented a retargeting system using both ANNs and RBFNs for comparison. We found that the two systems produced similar results (figure 1), and in some cases the ANN proved more expressive, although it was also more difficult to work with.
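The two models compared in the abstract can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: the data here is synthetic, and all dimensions, bandwidths, and learning rates are assumed for the example. An RBFN with Gaussian kernels centred on the training frames has a closed-form linear readout, while the MLP (one tanh hidden layer) is fitted by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: per-frame marker coordinates -> blendshape weights.
# 30 markers x 3 coordinates = 90 inputs; 20 blendshape weights as outputs.
n_frames, n_in, n_out = 200, 90, 20
X = rng.normal(size=(n_frames, n_in))
Y = np.tanh(X @ rng.normal(size=(n_in, n_out)) * 0.2)

# --- RBFN: Gaussian kernel centred on every training frame, with the
# linear output layer solved in closed form by least squares. ---
def rbf_features(X, centres, sigma):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

sigma = np.sqrt(n_in)                       # heuristic kernel bandwidth
Phi = rbf_features(X, X, sigma)
W_rbf, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
rbf_mse = float(np.mean((Phi @ W_rbf - Y) ** 2))

# --- MLP (ANN): one tanh hidden layer, trained by batch gradient descent. ---
hidden, lr = 64, 0.05
W1 = rng.normal(scale=0.1, size=(n_in, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, n_out)); b2 = np.zeros(n_out)

def forward(X):
    H = np.tanh(X @ W1 + b1)                # hidden activations
    return H, H @ W2 + b2                   # predicted blendshape weights

_, P0 = forward(X)
mse0 = float(np.mean((P0 - Y) ** 2))        # error before training
for _ in range(500):
    H, P = forward(X)
    err = (P - Y) / n_frames
    gW2, gb2 = H.T @ err, err.sum(0)        # output-layer gradients
    dH = (err @ W2.T) * (1.0 - H ** 2)      # backprop through tanh
    gW1, gb1 = X.T @ dH, dH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
_, P = forward(X)
mlp_mse = float(np.mean((P - Y) ** 2))
```

With one centre per training frame the RBFN essentially interpolates the training data, whereas the MLP's accuracy depends on the hidden width, learning rate, and number of iterations, which is consistent with the abstract's observation that the ANN can be more expressive but harder to work with.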
Article URL: http://dx.doi.org/10.1145/2668064.2668099
BibTeX format:
@inproceedings{Costigan:2014:FRU,
  author = {Timothy Costigan and Mukta Prasad and Rachel McDonnell},
  title = {Facial retargeting using neural networks},
  booktitle = {Motion in Games},
  pages = {31--38},
  month = nov,
  year = {2014},
}