Toward a higher-level visual representation for object-based image retrieval
Yan-Tao Zheng, Shi-Yong Neo, Tat-Seng Chua, Qi Tian
In The Visual Computer, 25(1):13–23, January 2009.
Abstract: We propose a higher-level visual representation, the visual synset, for object-based image retrieval beyond visual appearance. The proposed representation improves on the traditional part-based bag-of-words image representation in two ways. First, it strengthens the discriminative power of visual words by constructing an intermediate descriptor, the visual phrase, from frequently co-occurring sets of visual words. Second, to bridge differences in visual appearance and achieve better intra-class invariance, it clusters visual words and phrases into visual synsets based on their class probability distributions. The rationale is that the distribution of a visual word or phrase tends to peak around the object classes to which it belongs. Tests on the Caltech-256 data set show that the visual synset can partially bridge the visual differences among images of the same class and deliver satisfactory retrieval of relevant images with different visual appearances.
Keyword(s): Visual representation, Object-based image retrieval
BibTeX format:
@article{Zheng:2009:TAH,
  author = {Yan-Tao Zheng and Shi-Yong Neo and Tat-Seng Chua and Qi Tian},
  title = {Toward a higher-level visual representation for object-based image retrieval},
  journal = {The Visual Computer},
  volume = {25},
  number = {1},
  pages = {13--23},
  month = jan,
  year = {2009},
}
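
As a reading aid (not the authors' code), the following is a minimal Python sketch of the pipeline the abstract outlines: frequently co-occurring visual-word pairs become visual phrases, each word or phrase is assigned a class probability distribution estimated from labeled images, and items with similar distributions are grouped into synsets. The pair-based phrase mining, the smoothing constant, the k-means clustering of distributions, and all function names are assumptions standing in for the paper's actual methods.

# Hypothetical sketch of the visual-synset pipeline described in the abstract.
# Phrase mining via pair counts and k-means over class distributions are
# stand-ins; the paper's own mining and clustering criteria may differ.
from collections import Counter, defaultdict
from itertools import combinations
import numpy as np
from sklearn.cluster import KMeans

def mine_phrases(images, min_support=5):
    """Count co-occurring visual-word pairs; keep frequent ones as phrases.
    images: list of (set_of_visual_word_ids, class_label) tuples."""
    pair_counts = Counter()
    for words, _label in images:
        pair_counts.update(combinations(sorted(words), 2))
    return [pair for pair, c in pair_counts.items() if c >= min_support]

def class_distributions(images, items):
    """Estimate P(class | item) for each visual word (int) or phrase (tuple)."""
    classes = sorted({label for _, label in images})
    counts = defaultdict(Counter)
    for words, label in images:
        for item in items:
            present = (item in words) if isinstance(item, int) \
                      else set(item) <= words
            if present:
                counts[item][label] += 1
    dists = np.array([[counts[it][c] for c in classes] for it in items],
                     dtype=float) + 1e-9  # smooth so rows sum to > 0
    return dists / dists.sum(axis=1, keepdims=True)

def build_synsets(images, n_synsets=50, min_support=5):
    """Group words and phrases whose class distributions peak together."""
    vocab = sorted({w for words, _ in images for w in words})
    items = vocab + mine_phrases(images, min_support)
    dists = class_distributions(images, items)
    k = min(n_synsets, len(items))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(dists)
    synsets = defaultdict(list)
    for item, s in zip(items, labels):
        synsets[s].append(item)
    return dict(synsets)

At retrieval time, each image would then be re-encoded as a histogram over synset ids rather than raw visual words, so that visually different words and phrases sharing a synset can still match across images of the same object class.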