Deep neural networks with different filters or multiple branches have achieved good performance for single-image super-resolution (SR) in recent years. However, they ignore the high-frequency components of the multiscale contextual information in the low-resolution (LR) image. To address this problem, we propose a fusing attention network based on dilated convolution (DFAN) for SR. Specifically, we first propose a dilated convolutional attention module (DCAM), which captures multiscale contextual information from different regions of LR images by attending to multiple regions with receptive fields of different sizes. We then propose a multifeature attention block (MFAB), which further focuses on the high-frequency components of the multiscale contextual information and extracts more high-frequency features. Experimental results demonstrate that the proposed DFAN achieves improvements in both visual quality and quantitative evaluation.
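The multiscale idea behind a module like DCAM — parallel dilated convolutions whose effective receptive fields grow with the dilation rate, fused by attention-style weights — can be sketched in plain NumPy. This is a minimal illustration, not the paper's actual design: the single-channel kernel, the dilation rates (1, 2, 3), and the mean-based branch scores are all assumptions made here for clarity.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Valid-mode 2D correlation of a single-channel image with a dilated kernel.
    A k x k kernel with dilation d has an effective receptive field of
    (k - 1) * d + 1, so larger d sees a wider context with the same weights."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1
    eff_w = (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slice picks kernel taps spaced `dilation` pixels apart.
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def center_crop(a, size):
    """Crop a 2D array to a centered size x size window so branches align."""
    H, W = a.shape
    top, left = (H - size) // 2, (W - size) // 2
    return a[top:top + size, left:left + size]

def fuse_branches(branches):
    """Attention-style fusion: softmax over a per-branch scalar descriptor
    (the global mean here is a stand-in for a learned attention score)."""
    scores = np.array([b.mean() for b in branches])
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return sum(wi * b for wi, b in zip(w, branches))

rng = np.random.default_rng(0)
x = rng.random((16, 16))          # toy LR feature map
k = np.ones((3, 3)) / 9.0         # illustrative shared 3x3 kernel

# Branch outputs shrink as dilation grows: 14x14, 12x12, 10x10.
branches = [dilated_conv2d(x, k, dilation=d) for d in (1, 2, 3)]
fused = fuse_branches([center_crop(b, 10) for b in branches])
```

In a real network the branches would be learned convolution layers padded to a common size and the fusion weights would come from a small attention subnetwork; the sketch only shows how dilation widens the context each branch aggregates before fusion.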