Lightweight Salient Object Detection in Optical Remote-Sensing Images via Semantic Matching and Edge Alignment

Recently, many methods for salient object detection in optical remote-sensing images (ORSI-SOD) have been proposed based on convolutional neural networks (CNNs). However, most of them ignore the number of parameters and the computational cost that CNNs incur, and only a few pay attention to portability and mobility. To facilitate practical applications, in this article we propose a novel lightweight network for ORSI-SOD based on semantic matching and edge alignment, termed SeaNet. Specifically, SeaNet includes a lightweight MobileNet-V2 for feature extraction, a dynamic semantic matching module (DSMM) for high-level features, an edge self-alignment module (ESAM) for low-level features, and a portable decoder for inference. First, the high-level features are compressed into semantic kernels. Then, in DSMM, these semantic kernels are used to activate salient object locations in two groups of high-level features through dynamic convolution operations. Meanwhile, in ESAM, cross-scale edge information extracted from two groups of low-level features is self-aligned through an $L_{2}$ loss and used for detail enhancement. Finally, starting from the highest-level features, the decoder infers salient objects based on the accurate locations and fine details contained in the outputs of the two modules. Extensive experiments on two public datasets demonstrate that our lightweight SeaNet not only outperforms most state-of-the-art lightweight methods but also achieves accuracy comparable to state-of-the-art conventional methods, while having only 2.76 M parameters and running at 1.7 G floating-point operations (FLOPs) for $288 \times 288$ inputs. Our code and results are available at https://github.com/MathLee/SeaNet.
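The two core ideas in the abstract can be illustrated with a minimal PyTorch sketch: a per-image semantic kernel generated from globally pooled high-level features and applied as a depthwise dynamic convolution (the DSMM idea), and edge maps derived from two low-level feature groups aligned with an $L_{2}$ (MSE) loss (the ESAM idea). All module and function names below are illustrative assumptions, not the authors' implementation; the official code is at https://github.com/MathLee/SeaNet.

    # Hedged sketch of DSMM-style dynamic convolution and ESAM-style edge alignment.
    # Names and channel sizes are illustrative, not taken from the official SeaNet code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class DynamicSemanticMatch(nn.Module):
        """Generate a per-image semantic kernel from high-level features and
        convolve the features with it (hypothetical DSMM-style block)."""

        def __init__(self, channels: int, kernel_size: int = 3):
            super().__init__()
            self.kernel_size = kernel_size
            # Map globally pooled features to one k x k depthwise filter per channel.
            self.kernel_gen = nn.Linear(channels, channels * kernel_size * kernel_size)

        def forward(self, feat: torch.Tensor) -> torch.Tensor:
            b, c, h, w = feat.shape
            pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)      # (B, C) semantic summary
            kernels = self.kernel_gen(pooled).view(b * c, 1, self.kernel_size, self.kernel_size)
            # Grouped conv applies each image's own kernel to its own channels.
            feat_flat = feat.view(1, b * c, h, w)
            out = F.conv2d(feat_flat, kernels, padding=self.kernel_size // 2, groups=b * c)
            return out.view(b, c, h, w)


    def edge_alignment_loss(low1: torch.Tensor, low2: torch.Tensor) -> torch.Tensor:
        """L2 loss between edge maps derived from two low-level feature groups
        (a stand-in for the ESAM self-alignment objective)."""
        edge1 = low1.mean(dim=1, keepdim=True)   # placeholder edge maps: channel means
        edge2 = low2.mean(dim=1, keepdim=True)
        edge2 = F.interpolate(edge2, size=edge1.shape[-2:], mode="bilinear", align_corners=False)
        return F.mse_loss(edge1, edge2)


    if __name__ == "__main__":
        # Example shapes loosely matching MobileNet-V2 stages for a 288 x 288 input.
        x_high = torch.randn(2, 96, 18, 18)
        x_low1 = torch.randn(2, 24, 72, 72)
        x_low2 = torch.randn(2, 32, 36, 36)
        activated = DynamicSemanticMatch(96)(x_high)
        loss = edge_alignment_loss(x_low1, x_low2)
        print(activated.shape, loss.item())

The grouped-convolution trick (reshaping a batch of B images with C channels into a single tensor with B*C channels) lets each image be filtered by its own generated kernel in a single F.conv2d call, which is a common way to realize dynamic convolution efficiently.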

Keywords: remote sensing; salient object detection; semantic matching; edge alignment; lightweight network

Journal Title: IEEE Transactions on Geoscience and Remote Sensing
Year Published: 2023
