Powered by deep learning, automatic check-out (ACO) has made great strides. Nevertheless, owing to the complexity of real scenes, ACO remains a highly challenging task in computer vision. Existing methods cannot fully exploit contextual information, which limits further gains in checkout accuracy. In this study, we propose a novel context-guided feature enhancement network (CGFENet), which detects products on multi-scale features by exploring both global and local context. Specifically, we design three customized modules: a global context learning module (GCLM), a local context learning module (LCLM), and an attention transfer module (ATM). GCLM enhances the representation of feature maps by fully exploiting global context information, LCLM gradually strengthens the interactions between local and global features, and ATM directs the model's attention toward challenging products. To validate the effectiveness of the proposed CGFENet, we conduct extensive experiments on the large-scale retail product checkout dataset. Experimental results show that CGFENet achieves favorable performance and surpasses state-of-the-art methods, reaching 85.88% checkout accuracy in the averaged mode, compared with 56.68% for the baseline method.
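The abstract describes the three modules only at a high level, so the following is a minimal, hypothetical PyTorch sketch of how such global-context, local-fusion, and attention-transfer blocks are commonly realized. The layer choices (global average pooling for GCLM, concatenation plus a 3x3 convolution for LCLM, and a 1x1-convolution spatial gate for ATM) are assumptions for illustration, not details taken from the paper.

# Hypothetical sketch only; shapes and layer designs are assumptions.
import torch
import torch.nn as nn

class GCLM(nn.Module):
    """Global context learning module: recalibrates a feature map with
    globally pooled context (assumed squeeze-and-excitation-style design)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global context vector
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)                           # channel-wise reweighting

class LCLM(nn.Module):
    """Local context learning module: gradually fuses local features with the
    globally enhanced features (assumed concat + 3x3-conv fusion)."""
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, local_feat, global_feat):
        return self.fuse(torch.cat([local_feat, global_feat], dim=1))

class ATM(nn.Module):
    """Attention transfer module: produces a spatial attention map so the
    detector focuses on challenging products (assumed 1x1-conv spatial gate)."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.attn(x)                           # spatially reweighted features

if __name__ == "__main__":
    feat = torch.randn(2, 256, 32, 32)                    # one level of a feature pyramid
    g = GCLM(256)(feat)
    l = LCLM(256)(feat, g)
    out = ATM(256)(l)
    print(out.shape)                                      # torch.Size([2, 256, 32, 32])

In this sketch the three modules are applied per pyramid level before the detection head; how CGFENet actually wires them across scales is not specified in the abstract.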
               