Cross-media search over large-scale social network big data has become increasingly valuable because it supports querying across different data modalities. Deep hash networks have shown high potential for efficient and effective cross-media search. However, because social network data often exhibit text sparsity, diversity, and noise, the search performance of existing methods degrades on such data. To address this problem, this article proposes a novel end-to-end cross-media semantic correlation learning model based on a deep hash network and semantic expansion for social network cross-media search (DHNS). The approach combines deep network feature learning and hash-code quantization learning for multimodal data in a unified optimization architecture, which preserves both intramedia similarity and intermedia correlation by minimizing both a cross-media correlation loss and a binary hash quantization loss. In addition, the approach realizes semantic relationship expansion by constructing an image–word relation graph, mining the latent semantic relationships between images and words, and obtaining semantic embeddings from both internal graph deep walks and an external knowledge base. Experimental results demonstrate that DHNS yields better cross-media search performance on standard benchmarks.
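The abstract describes a joint objective that couples a cross-media correlation term with a binary hash-quantization term. The sketch below illustrates that general structure only; the specific loss formulas, the weight `lambda_q`, and all function names are assumptions for illustration, not the actual DHNS objective.

```python
import numpy as np

def correlation_loss(h_img, h_txt):
    # Encourage paired image/text continuous hash codes to agree
    # (mean squared distance between paired codes).
    return float(np.mean((h_img - h_txt) ** 2))

def quantization_loss(h):
    # Penalize the gap between continuous codes and their binarization,
    # so thresholding to {-1, +1} loses little information.
    return float(np.mean((h - np.sign(h)) ** 2))

def joint_objective(h_img, h_txt, lambda_q=0.5):
    # Unified objective in the spirit of the abstract: a cross-media
    # correlation term plus weighted binary quantization terms.
    return (correlation_loss(h_img, h_txt)
            + lambda_q * (quantization_loss(h_img) + quantization_loss(h_txt)))

rng = np.random.default_rng(0)
h_img = np.tanh(rng.normal(size=(4, 16)))  # continuous codes in (-1, 1)
h_txt = np.tanh(rng.normal(size=(4, 16)))
loss = joint_objective(h_img, h_txt)
```

Minimizing such an objective pushes the continuous codes of paired items together while driving each code toward its binary sign vector, so the final `sign(h)` hash codes remain discriminative.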