
Fusing Attention Network Based on Dilated Convolution for Superresolution


Deep neural networks with different filters or multiple branches have achieved good performance for single-image superresolution (SR) in recent years. However, they ignore the high-frequency components of the multiscale context information of the low-resolution (LR) image. To solve this problem, we propose a fusing attention network based on dilated convolution (DFAN) for SR. Specifically, we first propose a dilated convolutional attention module (DCAM), which captures multiscale contextual information from different regions of LR images by covering multiple regions with receptive fields of different sizes. Then, we propose a multifeature attention block (MFAB), which further focuses on the high-frequency components of the multiscale contextual information and extracts more high-frequency features. Experimental results demonstrate that the proposed DFAN achieves performance improvements in terms of visual quality evaluation and quantitative evaluation.
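A minimal sketch (PyTorch) of the multiscale-attention idea described in the abstract: parallel dilated convolutions with different dilation rates cover receptive fields of different sizes, and a channel-attention gate reweights the fused features. The module and parameter names below are illustrative assumptions, not the authors' DCAM/MFAB implementation.

```python
import torch
import torch.nn as nn

class DilatedAttentionBlock(nn.Module):
    """Hypothetical block: multiscale dilated branches + channel attention."""

    def __init__(self, channels: int = 64, dilations=(1, 2, 4), reduction: int = 16):
        super().__init__()
        # One 3x3 branch per dilation rate; padding = dilation keeps the
        # spatial size unchanged so branch outputs can be summed.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        # Squeeze-and-excitation-style channel attention over the fused features,
        # standing in for the attention described in the abstract.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = sum(branch(x) for branch in self.branches)  # multiscale context
        return x + fused * self.attention(fused)            # residual + gated fusion


if __name__ == "__main__":
    block = DilatedAttentionBlock(channels=64)
    lr_features = torch.randn(1, 64, 32, 32)   # features from an LR image
    print(block(lr_features).shape)            # torch.Size([1, 64, 32, 32])
```

This only illustrates the general pattern of combining dilated convolutions with attention for SR feature extraction; the paper's actual DCAM and MFAB designs may differ substantially.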

Keywords: dilated convolution; network based; attention network; fusing attention; based dilated; attention

Journal Title: IEEE Transactions on Cognitive and Developmental Systems
Year Published: 2023
