Context aware, multimodal, and semantic rendering engine
Patrick Salamin, Daniel Thalmann, Frederic Vexo
Proceedings of the 8th International Conference on Virtual Reality Continuum and its Applications in Industry, 2009, pp. 11--16.
Abstract: Nowadays, several techniques exist to render digital content such as graphics, audio, and haptics. Unfortunately, they require different faculties that are not always available; e.g., providing a picture to a blind person would be useless. In this paper, we present a new multimodal rendering engine built around a web-connected server that communicates with other devices to perform ubiquitous computing. To take advantage of the user's capabilities, we defined an ontology populated with the following elements: user, device, and information. With the help of this ontology, our system automatically selects and launches a rendering application. Several test-case applications were implemented to render shape, text, and video information via the audio, haptic, and visual channels. Validations demonstrate that our system is flexible, easily extensible, and shows promise.
Article URL: http://doi.acm.org/10.1145/1670252.1670257
BibTeX format:
@inproceedings{10.1145-1670252.1670257,
  author = {Patrick Salamin and Daniel Thalmann and Frederic Vexo},
  title = {Context aware, multimodal, and semantic rendering engine},
  booktitle = {Proceedings of the 8th International Conference on Virtual Reality Continuum and its Applications in Industry},
  pages = {11--16},
  year = {2009},
}
