SemantiCode: using content similarity and database-driven matching to code wearable eyetracker gaze data
Daniel F. Pontillo, Thomas B. Kinsman, Jeff B. Pelz
Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, 2010, pp. 267--270.
Abstract: Laboratory eyetrackers, constrained to a fixed display and a static (or accurately tracked) observer, facilitate automated analysis of fixation data. The development of wearable eyetrackers has extended the range of environments and tasks that can be studied, but at the expense of automated analysis. Wearable eyetrackers provide 2D point-of-regard (POR) in scene-camera coordinates, but the researcher is typically interested in some high-level semantic property (e.g., object identity, region, or material) surrounding individual fixation points. The synthesis of POR into fixations and semantic information remains a labor-intensive manual task, limiting the application of wearable eyetracking. We describe a system that segments POR videos into fixations and allows users to train a database-driven object-recognition system. A correctly trained library results in a very accurate and semi-automated translation of raw POR data into a sequence of objects, regions, or materials.
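The abstract does not specify the fixation-segmentation algorithm used. A common approach for grouping 2D point-of-regard (POR) samples into fixations is dispersion-threshold identification (I-DT); the sketch below is an illustration of that general technique, not the paper's implementation, and the threshold values are arbitrary example numbers.

```python
def segment_fixations(por, max_dispersion=25.0, min_samples=6):
    """Group (x, y) POR samples into fixations (I-DT-style sketch).

    por: list of (x, y) tuples in scene-camera pixel coordinates.
    max_dispersion: max (x-range + y-range), in pixels, for one fixation.
    min_samples: minimum window length to count as a fixation.
    Returns a list of (start_index, end_index_exclusive, centroid) tuples.
    """
    def dispersion(window):
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i, n = 0, len(por)
    while i <= n - min_samples:
        j = i + min_samples
        if dispersion(por[i:j]) <= max_dispersion:
            # Grow the window while dispersion stays under threshold.
            while j < n and dispersion(por[i:j + 1]) <= max_dispersion:
                j += 1
            xs = [p[0] for p in por[i:j]]
            ys = [p[1] for p in por[i:j]]
            centroid = (sum(xs) / (j - i), sum(ys) / (j - i))
            fixations.append((i, j, centroid))
            i = j
        else:
            i += 1
    return fixations
```

For example, a trace that dwells at one scene location, saccades, and dwells at a second location yields two fixations with the two dwell centroids.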
Article URL: http://doi.acm.org/10.1145/1743666.1743729
BibTeX format:
@inproceedings{10.1145-1743666.1743729,
  author = {Daniel F. Pontillo and Thomas B. Kinsman and Jeff B. Pelz},
  title = {SemantiCode: using content similarity and database-driven matching to code wearable eyetracker gaze data},
  booktitle = {Proceedings of the 2010 Symposium on Eye-Tracking Research \& Applications},
  pages = {267--270},
  year = {2010},
}
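The abstract describes content-similarity matching against a user-trained library but gives no implementation details. The following is one plausible sketch of that idea: summarize the image patch around each fixation with a color histogram and label it by nearest neighbor in a library of labeled training patches. All names, distance choices, and bin counts here are hypothetical.

```python
def color_histogram(patch, bins=8):
    """patch: list of (r, g, b) pixels with values 0-255.
    Returns a normalized per-channel histogram (3 * bins floats)."""
    hist = [0.0] * (3 * bins)
    step = 256 // bins
    for r, g, b in patch:
        hist[r // step] += 1
        hist[bins + g // step] += 1
        hist[2 * bins + b // step] += 1
    total = float(len(patch)) or 1.0
    return [h / total for h in hist]

def nearest_label(patch, library, bins=8):
    """library: list of (label, histogram) pairs built during training.
    Returns (label, L1_distance) of the closest library entry."""
    h = color_histogram(patch, bins)
    best = min(library,
               key=lambda entry: sum(abs(a - b) for a, b in zip(h, entry[1])))
    dist = sum(abs(a - b) for a, b in zip(h, best[1]))
    return best[0], dist
```

In use, a coder would label a few fixation patches by hand to seed the library; subsequent fixations are then matched automatically, which is one way a "correctly trained library" could yield semi-automated semantic coding.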