Conveying Emotions through Facially Animated Avatars in Networked Virtual Environments
Fabian Di Fiore, Peter Quax, Cedric Vanaken, Wim Lamotte, Frank Van Reeth
Motion in Games, June 2008, pp. 222--233.
Abstract: In this paper, our objective is to facilitate the way in which emotion is conveyed through avatars in virtual environments. The established approach requires end-users to manually select their emotional state through a text-based interface (using emoticons and/or keywords), after which pre-defined emotional states are applied to their avatars. In contrast to this rather limited solution, we envisage a system that automatically extracts emotion-related metadata from a video stream, most often originating from a webcam. Unlike the seemingly straightforward alternative of sending entire video streams - optimal in quality but often prohibitive in terms of bandwidth usage - this metadata extraction process enables the system to be deployed in large-scale environments, as the bandwidth required for the communication channel is drastically reduced.
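The bandwidth argument in the abstract can be illustrated with a back-of-the-envelope sketch. The message layout below (avatar id, emotion label index, intensity) is a hypothetical assumption for illustration only; the paper's actual metadata encoding is not described on this page.

```python
import struct

def pack_emotion_update(avatar_id: int, emotion_index: int, intensity: float) -> bytes:
    # Hypothetical compact emotion-metadata message: a 4-byte avatar id,
    # a 1-byte coarse emotion label index, and a 4-byte float intensity,
    # in network byte order. This field layout is an illustrative
    # assumption, not the paper's actual protocol.
    return struct.pack("!IBf", avatar_id, emotion_index, intensity)

msg = pack_emotion_update(avatar_id=42, emotion_index=3, intensity=0.8)

metadata_bps = len(msg) * 8 * 15        # 15 metadata updates per second
video_bps = 320 * 240 * 3 * 8 * 15      # raw 320x240 RGB webcam video at 15 fps

print(len(msg))                    # 9 bytes per update
print(metadata_bps)                # 1080 bits/s
print(video_bps // metadata_bps)   # ~25600x cheaper than raw video
```

Even against a compressed video stream (rather than the raw frames assumed here), a few bytes of metadata per update remain several orders of magnitude cheaper, which is what makes the approach viable for large-scale networked environments.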
Article URL: http://dx.doi.org/10.1007/978-3-540-89220-5_22
BibTeX format:
@incollection{DiFiore:2008:CET,
  author = {Fabian Di Fiore and Peter Quax and Cedric Vanaken and Wim Lamotte and Frank Van Reeth},
  title = {Conveying Emotions through Facially Animated Avatars in Networked Virtual Environments},
  booktitle = {Motion in Games},
  pages = {222--233},
  month = jun,
  year = {2008},
  doi = {10.1007/978-3-540-89220-5_22},
}

