The main challenge of cross-modal retrieval is how to efficiently realize cross-modal semantic alignment and reduce the heterogeneity gap. However, existing approaches either ignore the learning of multigrained semantic knowledge from different modalities or fail to learn consistent relation distributions of semantic details across multimodal instances. To this end, this article proposes a novel end-to-end cross-modal representation method, termed deep multigraph-based hierarchical enhanced semantic representation (MG-HESR). The method integrates MG-HESR with cross-modal adversarial learning: it captures multigrained semantic knowledge from cross-modal samples, realizes fine-grained semantic relation distribution alignment, and then generates modality-invariant representations in a common subspace. To evaluate its performance, extensive experiments are conducted on four benchmarks. The experimental results show that our method is superior to state-of-the-art methods.
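To make the adversarial part of the pipeline concrete, below is a minimal sketch of how cross-modal adversarial learning typically drives two modality-specific encoders toward modality-invariant embeddings in a common subspace. This is not the authors' MG-HESR implementation; the class names, network sizes, and feature dimensions (4096-d image features, 300-d text features, 512-d common space) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Projects one modality's features into the shared common subspace (sketch)."""
    def __init__(self, in_dim, common_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, common_dim),
        )

    def forward(self, x):
        return self.net(x)

class ModalityDiscriminator(nn.Module):
    """Tries to tell whether a common-space embedding came from the image or text branch."""
    def __init__(self, common_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(common_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, z):
        return self.net(z)

# Hypothetical dimensions for illustration only.
img_enc, txt_enc = ModalityEncoder(4096, 512), ModalityEncoder(300, 512)
disc = ModalityDiscriminator(512)
bce = nn.BCEWithLogitsLoss()

def adversarial_losses(img_feat, txt_feat):
    """Returns (discriminator loss, encoder loss) for one batch of paired features."""
    z_img, z_txt = img_enc(img_feat), txt_enc(txt_feat)
    ones = torch.ones(z_img.size(0), 1)
    zeros = torch.zeros(z_txt.size(0), 1)
    # Discriminator: separate image embeddings (label 1) from text embeddings (label 0).
    d_loss = bce(disc(z_img.detach()), ones) + bce(disc(z_txt.detach()), zeros)
    # Encoders: fool the discriminator so embeddings become modality-invariant.
    g_loss = bce(disc(z_img), zeros) + bce(disc(z_txt), ones)
    return d_loss, g_loss
```

In practice the encoder loss above would be combined with the retrieval objectives (e.g., the semantic alignment terms the abstract describes), and the discriminator and encoders would be updated in alternating steps.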