Facial performance sensing head-mounted display
Hao Li, Laura Trutoiu, Kyle Olszewski, Lingyu Wei, Tristan Trutna, Pei-Lun Hsieh, Aaron Nicholls, Chongyang Ma
In ACM Transactions on Graphics (TOG), 34(4), August 2015.
Abstract: There are currently no solutions for enabling direct face-to-face interaction between virtual reality (VR) users wearing head-mounted displays (HMDs). The main challenge is that the headset obstructs a significant portion of a user's face, preventing effective facial capture with traditional techniques. To advance virtual reality as a next-generation communication platform, we develop a novel HMD that enables 3D facial performance-driven animation in real time. Our wearable system uses ultra-thin flexible electronic materials, mounted on the foam liner of the headset, to measure surface strain signals corresponding to upper-face expressions. These strain signals are combined with a head-mounted RGB-D camera to enhance tracking in the mouth region and to account for inaccurate HMD placement. To map the input signals to a 3D face model, we perform a single-instance offline training session for each person. For reusable and accurate online operation, we propose a short calibration step that readjusts the Gaussian mixture distribution of the mapping before each use. The resulting animations are visually on par with cutting-edge depth sensor-driven facial performance capture systems and are hence suitable for social interactions in virtual worlds.
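As a rough illustration of the mapping step described in the abstract, the sketch below (Python, not the authors' implementation) fits a joint Gaussian mixture model over paired strain readings and blendshape weights collected in an offline training session, then conditions it on a live strain reading to predict expression weights, i.e., standard Gaussian mixture regression. The channel counts, function names, and synthetic data are illustrative assumptions, and the paper's per-use recalibration of the mixture is not shown.

import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

N_STRAIN = 8   # assumed number of strain-gauge channels on the foam liner
N_BLEND = 20   # assumed number of upper-face blendshape weights

def fit_joint_gmm(strain, blend, n_components=3, seed=0):
    """Fit a GMM on concatenated (strain, blendshape) training pairs."""
    joint = np.hstack([strain, blend])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=seed)
    return gmm.fit(joint)

def predict_blend(gmm, x, d=N_STRAIN):
    """Gaussian mixture regression: E[blend | strain = x]."""
    n_y = gmm.means_.shape[1] - d
    resp = np.empty(gmm.n_components)
    cond = np.empty((gmm.n_components, n_y))
    for k in range(gmm.n_components):
        mu, cov = gmm.means_[k], gmm.covariances_[k]
        mu_x, mu_y = mu[:d], mu[d:]
        cxx, cyx = cov[:d, :d], cov[d:, :d]
        # Responsibility of component k for the observed strain vector.
        resp[k] = gmm.weights_[k] * multivariate_normal.pdf(x, mu_x, cxx)
        # Conditional mean of the blendshape block given the strain block.
        cond[k] = mu_y + cyx @ np.linalg.solve(cxx, x - mu_x)
    resp /= resp.sum()
    return resp @ cond

# Toy usage with synthetic data standing in for a training session.
rng = np.random.default_rng(0)
strain = rng.normal(size=(1000, N_STRAIN))
blend = np.tanh(strain @ rng.normal(size=(N_STRAIN, N_BLEND)))
gmm = fit_joint_gmm(strain, blend)
print(predict_blend(gmm, strain[0]).shape)  # -> (20,)

Conditioning a joint GMM in this way costs only a few small linear solves per frame, which is consistent with the real-time operation the abstract claims.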
@article{10.1145-2766939,
  author = {Hao Li and Laura Trutoiu and Kyle Olszewski and Lingyu Wei and Tristan Trutna and Pei-Lun Hsieh and Aaron Nicholls and Chongyang Ma},
  title  = {Facial performance sensing head-mounted display},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {34},
  number = {4},
  articleno = {47},
  doi = {10.1145/2766939},
  month = aug,
  year = {2015},
}