Relating words and image segments on multiple layers for effective browsing and retrieval

Description

This work proposes a new method for relating words and image segments by finding semantic coherence between these two cues on multiple layers. The method is based on matching clusters of visual segments with words at various levels of abstraction. Our purpose is to ease two main problems encountered in content-based image retrieval: the lack of semantic information captured by visual feature-based indexing, and the difficulty of handling the subjectivity of user queries. The method is promising for effective browsing and retrieval in large image data sets. It supports both target- and category-type browsing and searching schemes, as well as textual and/or visual query specifications. Results of experiments on a wide, nonspecific image domain suggest that step-by-step semantic inference on consecutive layers of image-word association helps to improve the accuracy of retrieval and browsing.
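The core idea of image-word association can be illustrated with a minimal sketch. The example below is a hypothetical simplification, not the authors' method: it assumes segment-cluster labels have already been produced by clustering visual features (e.g., with k-means), and estimates P(word | cluster) from word-cluster co-occurrence counts across an image collection. All data and names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy collection (hypothetical): each image has caption words and the
# cluster labels of its visual segments (from a prior clustering step).
images = [
    {"words": ["tiger", "grass"], "clusters": [0, 1]},
    {"words": ["tiger", "water"], "clusters": [0, 2]},
    {"words": ["sky", "grass"],   "clusters": [3, 1]},
]

# Count co-occurrences of caption words with segment clusters.
cooc = defaultdict(Counter)
for img in images:
    for c in img["clusters"]:
        for w in img["words"]:
            cooc[c][w] += 1

def word_probs(cluster):
    """Estimate P(word | cluster) from raw co-occurrence counts."""
    total = sum(cooc[cluster].values())
    return {w: n / total for w, n in cooc[cluster].items()}

# Cluster 0 co-occurs most often with "tiger" in this toy data.
print(word_probs(0))  # → {'tiger': 0.5, 'grass': 0.25, 'water': 0.25}
```

A layered scheme, as the abstract describes, would repeat this kind of association at several levels of abstraction (coarser clusters linked to more general words), refining the mapping step by step.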
