Image search results refinement via outlier detection using deep contexts
- Junyang Lu,
- Jiazhen Zhou,
- Jingdong Wang,
- Tao Mei,
- Xian-Sheng Hua,
- Shipeng Li
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012) | Published by IEEE
Visual reranking has become a widely accepted method for improving traditional text-based image search results. Its main principle is to exploit the visual aggregation property of relevant images among the top results: confident relevant images are detected, explicitly or implicitly, and ranking scores are propagated among visually similar images to boost the relevant ones. However, this visual aggregation property does not always hold, and such schemes may fail. In this paper, we instead propose to filter out the most probably irrelevant images using deep contexts, i.e., extra information that is not limited to the current search results. The deep contexts of an image consist of the sets of images returned by searches with queries formed from the textual context of that image. We compare the popularity of the image in the current search results with its popularity in the deep contexts to compute an irrelevance score. The irrelevance scores are then propagated to images whose useful textual context is missing. We formulate the two schemes jointly as a Markov random field, which is efficiently solved by graph cuts. The key point is that our scheme does not rely on the assumption that relevant images are visually aggregated among the top results; instead, it is based on the observation that an outlier under the current query is likely to be more popular under some other query. Finally, we perform graph reranking over the filtered results to reorder them. Experimental results on the INRIA dataset show that our proposed method achieves significant improvements over previous approaches.
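The abstract names the general machinery (a binary Markov random field with unary irrelevance costs plus a smoothness term over visually similar images, minimized exactly by an s-t graph cut) without an implementation. The sketch below illustrates only that generic construction, not the paper's actual potentials: `irrelevance`, `similarity`, and the weight `lam` are hypothetical stand-ins, and `networkx` is used in place of a dedicated max-flow solver.

```python
# A minimal sketch, assuming hypothetical inputs: `irrelevance` maps each image
# to a score in [0, 1] (higher = more likely an outlier under the current
# query), and `similarity` maps image pairs to a visual-similarity weight.
import networkx as nx

def filter_outliers(irrelevance, similarity, lam=0.5):
    """Label each image keep (0) or outlier (1) by minimizing a binary MRF
    energy with unary irrelevance costs and a Potts smoothness term,
    solved exactly as an s-t minimum cut."""
    g = nx.DiGraph()
    src, snk = "__src__", "__snk__"
    for img, s in irrelevance.items():
        # Cutting src -> img assigns label 1 (outlier) at cost 1 - s;
        # cutting img -> snk assigns label 0 (keep) at cost s.
        g.add_edge(src, img, capacity=1.0 - s)
        g.add_edge(img, snk, capacity=s)
    for (i, j), w in similarity.items():
        # Potts pairwise term: pay lam * w when visually similar
        # neighbours receive different labels.
        g.add_edge(i, j, capacity=lam * w)
        g.add_edge(j, i, capacity=lam * w)
    _, (keep_side, outlier_side) = nx.minimum_cut(g, src, snk)
    return {img for img in outlier_side if img != snk}

if __name__ == "__main__":
    scores = {"a": 0.9, "b": 0.8, "c": 0.1, "d": 0.2}
    sims = {("a", "b"): 1.0, ("c", "d"): 1.0, ("b", "c"): 0.1}
    print(filter_outliers(scores, sims))  # expected: {'a', 'b'}
```

The min cut is an exact minimizer here because the Potts pairwise term with nonnegative weights is submodular; a production system would typically use a specialized max-flow implementation such as the Boykov-Kolmogorov solver rather than a general-purpose graph library.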