Attention has been widely used in many tasks because it guides a network to concentrate on the most important regions of an input pattern. Nevertheless, most prior works focus on first-order attention designs, e.g. channel and spatial attention, and ignore higher-order attention mechanisms. In this work, we propose the Mixed High-Order Attention (MHA) module to model complex, high-order information in the attention mechanism, capturing subtle textures and producing discriminative attention maps. In addition, because convolution operates on local regions, it cannot capture global context or long-range dependencies; we therefore adopt a non-local block to obtain global attention features. Building on these components, we propose the Mixed High-Order Non-local Attention Network (MHNAN) to enrich the attention representation. Extensive experiments demonstrate the superiority of MHNAN for super-resolution over several state-of-the-art models.
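The abstract does not give implementation details, but the non-local block it refers to is typically the embedded-Gaussian formulation of Wang et al. (2018), where every spatial position attends to every other position. Below is a minimal sketch of such a block in PyTorch; the class name `NonLocalBlock`, the channel-reduction factor, and the residual connection are assumptions for illustration and may differ from the paper's exact design.

```python
import torch
import torch.nn as nn


class NonLocalBlock(nn.Module):
    """Minimal embedded-Gaussian non-local block (assumed design).

    Each position's response is a weighted sum over all positions,
    which provides the global context and long-range dependencies
    the abstract refers to.
    """

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inter = max(channels // reduction, 1)
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)  # query embedding
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)    # key embedding
        self.g = nn.Conv2d(channels, inter, kernel_size=1)      # value embedding
        self.out = nn.Conv2d(inter, channels, kernel_size=1)    # restore channel count

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                      # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)        # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)             # (B, HW, HW) pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                          # residual connection (assumed)


# Example usage on a feature map
feat = torch.randn(1, 64, 32, 32)
print(NonLocalBlock(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```

Note that the full HW-by-HW affinity matrix grows quadratically with spatial size, so in super-resolution pipelines such a block is usually applied at a downscaled feature resolution.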