Cross-domain image matching, which addresses the problem of searching for images across different visual domains such as photos, sketches, or paintings, has attracted intensive attention in computer vision due to its widespread applications. Unlike intra-domain matching, images from different domains differ markedly in their visual characteristics, which causes most existing approaches to fail. However, the gap between cross-domain images is analogous to the gap between English and Chinese: the two languages are bridged by an English-Chinese translation dictionary. Inspired by this idea, in this paper we propose a novel visual vocabulary translator for cross-domain image matching. The translator consists of two main modules: a pair of vocabulary trees, which serve as the codebooks of their respective domains, and an index file built from cross-domain image pairs. Through this translator, a feature from one visual domain can be translated into another. The proposed algorithm is extensively evaluated on two cross-domain matching tasks, i.e., photo-to-sketch matching and photo-to-painting matching. Experimental results demonstrate the effectiveness and efficiency of the visual vocabulary translator, and by employing it, the proposed algorithm achieves satisfactory performance in different matching systems. Furthermore, our work shows great potential for application across multiple visual domains.
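To make the translation idea concrete, the sketch below illustrates one plausible reading of the pipeline: a codebook per domain plus an index accumulated from cross-domain image pairs, which maps a source-domain visual word to its most frequently co-occurring target-domain word. This is a minimal illustration under stated assumptions, not the paper's implementation: flat k-means codebooks stand in for the vocabulary trees, and all class and parameter names are hypothetical.

```python
# Minimal sketch of a visual-vocabulary-translator idea (hypothetical names);
# flat k-means codebooks stand in for the paper's vocabulary trees.
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans


class VocabularyTranslator:
    def __init__(self, n_words=256, random_state=0):
        # One codebook per visual domain (e.g. photo and sketch).
        self.photo_codebook = KMeans(n_clusters=n_words, random_state=random_state)
        self.sketch_codebook = KMeans(n_clusters=n_words, random_state=random_state)
        # Index: photo word id -> counts of co-occurring sketch word ids,
        # accumulated from cross-domain image pairs.
        self.index = defaultdict(lambda: defaultdict(int))

    def fit(self, photo_feats, sketch_feats, pairs):
        """photo_feats, sketch_feats: lists of (n_i, d) local-descriptor arrays.
        pairs: list of (photo_idx, sketch_idx) cross-domain image pairs."""
        self.photo_codebook.fit(np.vstack(photo_feats))
        self.sketch_codebook.fit(np.vstack(sketch_feats))
        for p_idx, s_idx in pairs:
            p_words = self.photo_codebook.predict(photo_feats[p_idx])
            s_words = self.sketch_codebook.predict(sketch_feats[s_idx])
            # Link every photo word in the pair to every sketch word in the pair.
            for pw in np.unique(p_words):
                for sw in np.unique(s_words):
                    self.index[pw][sw] += 1

    def translate(self, photo_feat):
        """Translate one photo-domain descriptor into a sketch-domain visual word."""
        pw = self.photo_codebook.predict(photo_feat.reshape(1, -1))[0]
        candidates = self.index.get(pw)
        if not candidates:
            return None  # no paired evidence for this source-domain word
        # Return the sketch word that co-occurs most often with this photo word.
        return max(candidates, key=candidates.get)
```

In such a setup, retrieval would quantize a query's descriptors in the source domain, translate each word, and then score database images in the target domain by the translated words, so the two codebooks play the role of the two languages and the pair-based index plays the role of the dictionary.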