Recently, the online retrieval task has received widespread attention, as it is closely related to many real-world applications. However, existing hashing-based online retrieval methods suffer from two main problems: a) the models tend to be biased towards the current streaming data because historical streaming data is unavailable; b) whenever new streaming data arrives and the hashing functions are updated, all historical binary codes must be recomputed, which imposes a heavy computation burden. To address these two issues, we propose a novel Online Residual Quantization (ORQ) method that achieves efficient streaming data quantization via small-scale residual quantization codebooks. For the first problem, we design a residual quantization module that learns multiple residual codebooks to quantize the floating-point streaming data, which effectively reduces the quantization error and allows the binary codes to be easily reconstructed back into the original floating-point data. Then, with the reconstructed historical data, a balanced affinity matrix is developed to model the semantic relationships, e.g., similarities and differences, between the historical and current data distributions, which prevents the model from being biased towards the current data distribution. For the second problem, when new streaming data arrives, only the residual codebooks need to be updated, rather than all historical binary codes as in hashing-based methods, which significantly reduces the computation burden. Comprehensive experiments on six benchmarks demonstrate that ORQ yields significant improvements (i.e., 1.2% $\sim$ 4.9% in average mAP) over state-of-the-art methods.
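To illustrate the residual quantization idea the abstract describes (each stage encodes the residual left by the previous stage, and the reconstruction is the sum of the selected codewords), here is a minimal NumPy sketch. The codebooks below are random placeholders and each contains a zero codeword so a stage can "pass"; in ORQ the codebooks are learned and updated online, which this sketch does not attempt to reproduce.

```python
import numpy as np

def residual_quantize(X, codebooks):
    """Encode each row of X as one codeword index per stage.

    X: (n, d) float data; codebooks: list of (k, d) arrays.
    Returns an (n, num_stages) array of integer codes.
    """
    residual = X.copy()
    codes = []
    for C in codebooks:
        # pick the nearest codeword for the current residual
        dists = ((residual[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(axis=1)
        codes.append(idx)
        residual = residual - C[idx]  # remaining error goes to the next stage
    return np.stack(codes, axis=1)

def reconstruct(codes, codebooks):
    """Approximate the original data by summing the selected codewords."""
    return sum(C[codes[:, m]] for m, C in enumerate(codebooks))

# Toy demo with random codebooks (illustration only, not learned as in ORQ).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
books = [rng.normal(size=(16, 8)) for _ in range(3)]
for C in books:
    C[0] = 0.0  # zero codeword: a stage is never forced to add error
codes = residual_quantize(X, books)
X_hat = reconstruct(codes, books)
```

Because each codebook contains a zero codeword, the reconstruction error after all three stages can never exceed the error after the first stage alone, which is the property that lets multiple small codebooks approximate the original float data closely.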
               