Influence of stimulus and viewing task types on a learning-based visual saliency model
Binbin Ye, Yusuke Sugano, Yoichi Sato
Proceedings of the Symposium on Eye Tracking Research and Applications, 2014, pp. 271--274.
Abstract: Learning-based approaches that use actual human gaze data have proven to be an efficient way to acquire accurate visual saliency models and have attracted much interest in recent years. However, it remains to be answered how different types of stimuli (e.g., fractal images, and natural images with or without human faces) and viewing tasks (e.g., free viewing or a preference rating task) affect learned visual saliency models. In this study, we quantitatively investigate how learned saliency models differ when trained on datasets collected under different settings (image contextual level and viewing task) and discuss the importance of choosing appropriate experimental settings.
@inproceedings{10.1145-2578153.2578199,
author = {Binbin Ye and Yusuke Sugano and Yoichi Sato},
title = {Influence of stimulus and viewing task types on a learning-based visual saliency model},
booktitle = {Proceedings of the Symposium on Eye Tracking Research and Applications},
pages = {271--274},
year = {2014},
}