
Lightweight Self-Attention Residual Network for Hyperspectral Classification


Compared with traditional hyperspectral image classification methods, classification models based on deep convolutional neural networks (DCNNs) can achieve higher-precision classification. However, the increase in classification accuracy has led to explosive growth in model complexity. In this letter, we propose a more lightweight and efficient residual structure to replace the standard residual structure and alleviate this problem. The structure uses the "divide and conquer" idea to reduce the number of model parameters and the computational cost. In addition, it introduces a self-attention mechanism so that the input feature map and output feature map can be adaptively fused, further enhancing the feature extraction ability of the residual structure. The experimental results reveal that the proposed residual structure significantly reduces model complexity while maintaining high classification accuracy, even surpassing current mainstream classification models.
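The letter's abstract describes two ideas: splitting the feature channels into groups that are transformed independently ("divide and conquer"), and an attention gate that adaptively fuses the block's input with its transformed output. The paper itself provides no code, so the following NumPy sketch is only an illustration of those ideas under stated assumptions; the function names, the 1x1 grouped mixing, and the sigmoid gate derived from global average pooling are all assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lightweight_residual_block(x, group_weights, attn_w, attn_b, groups=4):
    """Sketch of a lightweight residual block with attention-gated fusion.

    x            : feature map of shape (C, H, W); C divisible by `groups`.
    group_weights: list of (C//groups, C//groups) 1x1-conv weights, one per group
                   (hypothetical parameterization, not from the paper).
    attn_w, attn_b: parameters of a per-channel gate computed from global pooling.
    """
    C, H, W = x.shape
    g = C // groups

    # "Divide and conquer": each channel group is mixed independently with a
    # cheap 1x1 convolution, costing groups * g * g = C*C/groups parameters
    # instead of C*C for one full channel-mixing layer.
    parts = []
    for i in range(groups):
        chunk = x[i * g:(i + 1) * g]                       # (g, H, W)
        parts.append(np.einsum('oc,chw->ohw', group_weights[i], chunk))
    f = np.concatenate(parts, axis=0)                      # transformed F(x)

    # Self-attention gate: per-channel coefficients from global average
    # pooling, used to adaptively fuse the input and the transformed output
    # instead of the plain additive skip connection x + F(x).
    pooled = f.mean(axis=(1, 2))                           # (C,)
    alpha = sigmoid(attn_w @ pooled + attn_b)              # (C,) in (0, 1)
    return alpha[:, None, None] * f + (1.0 - alpha)[:, None, None] * x
```

With `groups=4`, the grouped mixing above uses a quarter of the parameters of a full channel-mixing layer, which is the kind of reduction the abstract alludes to; the exact structure in the letter may differ.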

Keywords: network; structure; self-attention; classification; residual structure; model

Journal Title: IEEE Geoscience and Remote Sensing Letters
Year Published: 2022

