Automatic analysis of 3D gaze coordinates on scene objects using data from eye-tracking and motion-capture systems
Kai Essig, Daniel Dornbusch, Daniel Prinzhorn, Helge Ritter, Jonathan Maycock, Thomas Schack
Proceedings of the Symposium on Eye Tracking Research and Applications, 2012, pp. 37--44.
Abstract: We implemented a system, called the VICON-EyeTracking Visualizer, that combines mobile eye-tracking data with motion-capture data to calculate and visualize the 3D gaze vector within the motion-capture co-ordinate system. To ensure that both devices were temporally synchronized, we used software we had previously developed. Placing reflective markers on objects in the scene makes their positions known; spatially synchronizing the eye tracker with the motion-capture system then allows us to automatically compute how many times, and where, fixations occur, thus overcoming the time-consuming and error-prone disadvantages of the traditional manual annotation process. We evaluated our approach by comparing its outcome for a simple looking task and a more complex grasping task against the average results produced by manual annotation. Preliminary data reveal that the program differed from the average manual annotation results by only approximately 3 percent in the looking task with regard to the number of fixations and the cumulative fixation duration on each point in the scene. In the case of the more complex grasping task, the results depend on object size: for larger objects there was good agreement (differences of less than 16 percent, or 950 ms), but this degraded for smaller objects, where more saccades occur towards object boundaries. The advantages of our approach are easy user calibration, unrestricted body movement (due to the mobile eye-tracking system), and compatibility with any wearable eye tracker and marker-based motion-tracking system. Extending existing approaches, our system is also able to monitor fixations on moving objects. The automatic analysis of gaze and movement data in complex 3D scenes can be applied to a variety of research domains, e.g., Human-Computer Interaction, Virtual Reality, or grasping and gesture research.
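The core computation the abstract describes — testing whether the 3D gaze vector hits an object whose position is known from reflective markers — can be sketched as a simple ray test. The sketch below is illustrative only, not the paper's implementation: the function name, the spherical approximation of objects, and all parameters are assumptions for the example.

```python
import math

def gaze_hits_object(eye_pos, gaze_dir, marker_pos, radius):
    """Return True if a gaze ray from eye_pos along gaze_dir passes
    within `radius` of the object's reflective-marker position.
    All positions are 3D points in the motion-capture coordinate system."""
    # Normalize the gaze direction to a unit vector.
    norm = math.sqrt(sum(c * c for c in gaze_dir))
    d = [c / norm for c in gaze_dir]
    # Vector from the eye to the object's marker.
    v = [m - e for m, e in zip(marker_pos, eye_pos)]
    # Projection of that vector onto the gaze ray.
    t = sum(vi * di for vi, di in zip(v, d))
    if t < 0:
        return False  # object lies behind the observer
    # Squared perpendicular distance from the marker to the gaze ray.
    closest_sq = sum((vi - t * di) ** 2 for vi, di in zip(v, d))
    return closest_sq <= radius * radius
```

Counting how many consecutive gaze samples satisfy this test per object would yield the per-object fixation counts and cumulative fixation durations the paper compares against manual annotation; the spherical object model is the simplest choice and also hints at why accuracy degrades for small objects, where fixations near boundaries fall outside the radius.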
Article URL: http://doi.acm.org/10.1145/2168556.2168561
BibTeX format:
@inproceedings{10.1145-2168556.2168561,
  author = {Kai Essig and Daniel Dornbusch and Daniel Prinzhorn and Helge Ritter and Jonathan Maycock and Thomas Schack},
  title = {Automatic analysis of 3D gaze coordinates on scene objects using data from eye-tracking and motion-capture systems},
  booktitle = {Proceedings of the Symposium on Eye Tracking Research and Applications},
  pages = {37--44},
  year = {2012},
}