Rotation invariance for dense features inside regions of interest
Priyadarshi Bhattacharya, Marina L. Gavrilova
In The Visual Computer, 30(6-8), June 2014.
Abstract: Interest points have traditionally been favoured over dense features for image retrieval tasks, where the goal is to retrieve images similar to a query image from an image corpus. While interest points are invariant to scale and rotation, their coverage of the image is inadequate for sub-image retrieval problems, where the query image occupies only a small part of the corpus image. Dense features, on the other hand, provide excellent coverage but lack invariance, as they are computed at a fixed scale and orientation. Recently, we proposed a novel technique for combining dense features with interest points (Bhattacharya and Gavrilova, Vis Comput 29(6–8):491–499, 2013) to leverage the benefits of both worlds. That framework makes dense features scale invariant but not rotation invariant. In this paper, we build on it by incorporating rotation invariance for dense features and introducing several improvements in the voting and match score computation stages. Our method produces high-quality recognition results that outperform bag of words, even with geometric verification, as well as several state-of-the-art methods that incorporate spatial information. We achieve significant improvements in both search speed and accuracy over (Bhattacharya and Gavrilova, Vis Comput 29(6–8):491–499, 2013). Experiments on the Oxford Buildings, Holidays and UKbench datasets reveal that our method is robust not only to the viewpoint and scale changes that occur in real-world photographs but also to geometric transformations.
@article{Bhattacharya:2014:RIF,
author = {Priyadarshi Bhattacharya and Marina L. Gavrilova},
title = {Rotation invariance for dense features inside regions of interest},
journal = {The Visual Computer},
volume = {30},
number = {6-8},
pages = {569--578},
month = jun,
year = {2014},
}