Abstract In recent years, hashing-based cross-modal retrieval methods have attracted considerable attention owing to their significant reductions in computational cost and storage consumption. Most previous cross-modal hashing methods assume that data examples in different modalities are fully paired. They neglect the fact that, in practical applications, the data are often unpaired, with no one-to-one correspondences across modalities. Although several methods have addressed the semi-paired (partially paired) scenario, they ignore the completely unpaired one. In this paper, we propose a novel cross-modal hashing method, named Unpaired Cross-Modal Hashing (UCMH), for cross-modal retrieval over data with completely unpaired relationships. It leverages matrix factorization, similarity preservation, and semantic information to map the data of each modality into its own semantic space. Moreover, unlike most previous approaches, we construct an affinity matrix to bridge the semantic gap between data in different semantic spaces, which allows our method to handle single-label and multi-label unpaired cases simultaneously. Extensive experiments on one single-label dataset (Wiki) and two multi-label datasets (MIR Flickr and NUS-WIDE) demonstrate that UCMH outperforms state-of-the-art cross-modal hashing methods.
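The affinity-matrix idea can be sketched as follows. With completely unpaired data, no instance-level correspondence exists, but a semantic affinity between an image set and a text set can still be derived from their label annotations alone. The snippet below is an illustrative assumption about how such a matrix might be built (it is not the authors' actual implementation): a cosine affinity over label vectors reduces to exact class agreement in the single-label case and to a weighted label-overlap score in the multi-label case, covering both scenarios with one formula.

```python
import numpy as np

def label_affinity(L_img, L_txt):
    """Cosine affinity between an (n_img x c) and an (n_txt x c) label matrix.

    Hypothetical sketch of the cross-modal affinity described in the abstract.
    For single-label data the rows are one-hot, so entries are 1 for same-class
    pairs and 0 otherwise; for multi-label data, entries measure the
    cosine-weighted overlap of the two label sets.
    """
    # Normalize each label vector; the epsilon guards against all-zero rows.
    a = L_img / np.maximum(np.linalg.norm(L_img, axis=1, keepdims=True), 1e-12)
    b = L_txt / np.maximum(np.linalg.norm(L_txt, axis=1, keepdims=True), 1e-12)
    return a @ b.T  # (n_img x n_txt) affinity matrix, values in [0, 1]

# Toy unpaired example: 3 images and 2 texts (no pairing), 4 label categories.
L_img = np.array([[1, 0, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
L_txt = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 1]], dtype=float)
S = label_affinity(L_img, L_txt)
```

Note that the two modalities may contribute different numbers of samples, which is exactly the unpaired setting: the affinity matrix is rectangular and no row-to-column bijection is assumed.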