Cross-modal retrieval aims to retrieve semantically similar instances from one modality given a query from another modality. However, the differences in distributions and representations across modalities mean that cross-modal similarity cannot be measured directly. To address this problem, in this paper we propose a novel semantic consistent adversarial cross-modal retrieval (SC-ACMR) method, which learns semantically consistent representations for different modalities under an adversarial learning framework by considering both intra-modality and inter-modality semantic similarity. Specifically, within each modality, we minimize intra-class distances. Across modalities, we require the class centers of different modalities with the same semantic label to be as close as possible, and we also minimize the distances between samples and the class centers with the same semantic label in the other modality. Furthermore, we preserve the semantic similarity of the transformed features of different modalities through a semantic similarity matrix. Comprehensive experiments on two benchmark datasets show that the proposed method learns more compact semantic representations and achieves better cross-modal retrieval performance than many existing methods.
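
The abstract describes several distance-based loss components: intra-class compactness within each modality, alignment of same-label class centers across modalities, sample-to-cross-modal-center distances, and similarity preservation via a label-based semantic similarity matrix. The following is a minimal PyTorch sketch of how such terms could be computed for a mini-batch of paired image and text features sharing class labels; the function names, the use of squared Euclidean (MSE) distances, and the unweighted sum are illustrative assumptions, not the paper's actual formulation, and the adversarial modality discriminator is omitted.

import torch
import torch.nn.functional as F

def class_centers(features, labels, num_classes):
    # Mean feature vector per semantic class within one modality.
    # Classes absent from the batch keep a zero center in this sketch.
    centers = torch.zeros(num_classes, features.size(1), device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            centers[c] = features[mask].mean(dim=0)
    return centers

def semantic_consistency_losses(img_feat, txt_feat, labels, num_classes):
    """Illustrative intra- and inter-modality loss terms (assumed form)."""
    img_centers = class_centers(img_feat, labels, num_classes)
    txt_centers = class_centers(txt_feat, labels, num_classes)

    # Intra-modality: pull each sample toward its own modality's class center.
    intra = (F.mse_loss(img_feat, img_centers[labels]) +
             F.mse_loss(txt_feat, txt_centers[labels]))

    # Inter-modality: align same-label class centers of the two modalities ...
    center_align = F.mse_loss(img_centers, txt_centers)
    # ... and pull samples toward the same-label center of the other modality.
    cross = (F.mse_loss(img_feat, txt_centers[labels]) +
             F.mse_loss(txt_feat, img_centers[labels]))

    # Semantic similarity preservation: pairwise similarities of transformed
    # features should match a binary label-similarity matrix S
    # (S[i, j] = 1 iff samples i and j share a semantic label).
    S = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    sim = torch.sigmoid(img_feat @ txt_feat.t())
    preserve = F.binary_cross_entropy(sim, S)

    return intra + center_align + cross + preserve

In practice each term would likely carry its own trade-off weight and be combined with the adversarial objective; the equal-weight sum above is only for illustration.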