Voice activity detection from gaze in video mediated communication
Michal Hradis, Shahram Eivazi, Roman Bednarik
Proceedings of the Symposium on Eye Tracking Research and Applications, 2012, pp. 329--332.
Abstract: This paper discusses estimation of the active speaker in multi-party video-mediated communication from the gaze data of one of the participants. In the explored settings, we predict the voice activity of participants in one room based on gaze recordings of a single participant in another room. The two rooms were connected by high-definition, low-delay audio and video links, and the participants engaged in different activities ranging from casual discussion to simple problem-solving games. We treat the task as a classification problem. We evaluate several types of features and parameter settings within a Support Vector Machine (SVM) classification framework. The results show that, using the proposed approach, the vocal activity of a speaker can be correctly predicted 89% of the time for which gaze data are available.
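The abstract describes classifying voice activity from the gaze of a single remote observer. A minimal sketch of the kind of gaze-derived input such a classifier might use is shown below; the region names, window length, step size, and sample format are assumptions for illustration, not the paper's actual feature set, and the dwell-proportion features here would feed a downstream classifier such as the SVM the paper evaluates.

```python
# Illustrative sketch (not the authors' code): per-window gaze dwell
# features that could feed a voice-activity classifier such as an SVM.
# Region names, window/step lengths, and sample format are assumptions.
from collections import Counter

def dwell_features(gaze_samples, regions, window_ms=2000, step_ms=500):
    """gaze_samples: list of (timestamp_ms, region) tuples, where region
    is the remote participant's video tile the gaze falls on (or None).
    Returns one feature vector per sliding window: the proportion of
    in-window samples spent on each region."""
    if not gaze_samples:
        return []
    features = []
    start = gaze_samples[0][0]
    t_end = gaze_samples[-1][0]
    while start + window_ms <= t_end:
        in_win = [r for (t, r) in gaze_samples
                  if start <= t < start + window_ms and r is not None]
        counts = Counter(in_win)
        total = max(len(in_win), 1)
        # One dwell-proportion feature per remote participant region.
        features.append([counts[r] / total for r in regions])
        start += step_ms
    return features

# Toy example: the observer looks at participant "A" 80% of the time.
samples = [(t, "A" if t % 1000 < 800 else "B") for t in range(0, 4000, 100)]
feats = dwell_features(samples, regions=["A", "B"])
```

Each feature vector sums to one over the visible regions, so a high dwell proportion on one participant can act as the gaze cue that the classifier associates with that participant speaking.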
Article URL: http://doi.acm.org/10.1145/2168556.2168628
BibTeX format:
@inproceedings{10.1145-2168556.2168628,
  author = {Michal Hradis and Shahram Eivazi and Roman Bednarik},
  title = {Voice activity detection from gaze in video mediated communication},
  booktitle = {Proceedings of the Symposium on Eye Tracking Research and Applications},
  pages = {329--332},
  year = {2012},
}
