Recently, quantization has become an effective technique for large-scale image retrieval, encoding feature vectors into compact codes. However, improving the discriminative capability of codewords while minimizing the quantization error remains a great challenge. This letter proposes Dual Distance Optimized Deep Quantization (D2ODQ) to address this issue by minimizing the Euclidean distance between samples and codewords while maximizing the minimum cosine distance between codewords. To generate an evenly distributed codebook, we derive the general solution for the upper bound of the minimum cosine distance between codewords. Moreover, a scalar-constrained semantics-preserving loss is introduced to avoid trivial quantization boundaries and to ensure that each codeword quantizes features of only one category. Compared with state-of-the-art methods, our method achieves better performance on three benchmark datasets.
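To make the dual-distance idea concrete, the following is a minimal sketch of an objective that combines the two terms described in the abstract: the Euclidean quantization error between samples and their assigned codewords, and the minimum pairwise cosine distance between codewords. This is not the authors' D2ODQ implementation; the function name, the `margin_weight` trade-off parameter, and the NumPy formulation are assumptions made purely for illustration.

```python
import numpy as np

def dual_distance_objective(features, codebook, assignments, margin_weight=1.0):
    """Illustrative sketch of a dual-distance quantization objective.

    Term 1: mean squared Euclidean distance between each feature and its
            assigned codeword (quantization error, to be minimized).
    Term 2: minimum pairwise cosine distance between codewords, which the
            method seeks to maximize (subtracted here so the combined
            objective is minimized).
    """
    # Quantization error: ||x_i - c_{a_i}||^2 averaged over samples.
    quant_err = np.mean(np.sum((features - codebook[assignments]) ** 2, axis=1))

    # Pairwise cosine similarities between L2-normalized codewords.
    normed = codebook / np.linalg.norm(codebook, axis=1, keepdims=True)
    cos_sim = normed @ normed.T
    np.fill_diagonal(cos_sim, -np.inf)      # ignore self-similarity
    min_cos_dist = 1.0 - cos_sim.max()      # smallest separation among codewords

    # Small quantization error and large codeword separation both lower the objective.
    return quant_err - margin_weight * min_cos_dist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    features = rng.normal(size=(100, 16))        # 100 samples, 16-dim features
    codebook = rng.normal(size=(8, 16))          # 8 codewords
    # Assign each sample to its nearest codeword by Euclidean distance.
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)
    print(dual_distance_objective(features, codebook, assignments))
```

In this sketch the cosine-separation term depends only on the codebook, so maximizing it pushes codewords apart on the unit sphere (toward an evenly distributed codebook), while the Euclidean term keeps each codeword close to the samples it quantizes.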
               