Semantic labeling of remote sensing images is an important and challenging task that has recently attracted increasing attention in earth observation, environmental protection, land-use analysis, and related applications. However, effectively labeling objects with varied scales and similar textures remains an open challenge in the literature. To address this challenge, we propose MCFINet, a multidepth convolution network with shallow-deep feature integration that effectively combines multiscale contexts with shallow-layer and deep-layer features for labeling diverse objects. The network introduces two new modules: a multidepth convolutional module (MDCM) and an adaptive feature integration module (AFIM). The MDCM applies parallel convolutional branches with different depths but fixed small kernels to capture multiscale contexts, while the AFIM adaptively fuses the network's shallow-layer and deep-layer features to obtain more discriminative representations for segmenting objects with similar textures. Extensive experiments on two benchmark data sets demonstrate that MCFINet outperforms seven existing methods in most cases.
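The abstract does not provide implementation details for the two modules, so the following is only a minimal PyTorch-style sketch of how they could look. The branch depths, channel sizes, gated fusion rule, and the names MultiDepthConvModule and AdaptiveFeatureIntegration are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only: the exact MCFINet architecture is not given in
# the abstract, so all hyperparameters and the fusion rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiDepthConvModule(nn.Module):
    """MDCM sketch: parallel branches of stacked 3x3 convolutions with
    different depths, capturing context at several receptive-field sizes."""

    def __init__(self, in_ch, branch_ch, depths=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList()
        for d in depths:
            layers, ch = [], in_ch
            for _ in range(d):
                layers += [nn.Conv2d(ch, branch_ch, 3, padding=1),
                           nn.BatchNorm2d(branch_ch),
                           nn.ReLU(inplace=True)]
                ch = branch_ch
            self.branches.append(nn.Sequential(*layers))
        # 1x1 convolution merges the concatenated multiscale responses.
        self.merge = nn.Conv2d(branch_ch * len(depths), branch_ch, 1)

    def forward(self, x):
        return self.merge(torch.cat([b(x) for b in self.branches], dim=1))


class AdaptiveFeatureIntegration(nn.Module):
    """AFIM sketch: fuses a shallow and a deep feature map with a learned,
    input-dependent channel gate (the gating scheme is an assumption)."""

    def __init__(self, shallow_ch, deep_ch, out_ch):
        super().__init__()
        self.proj_shallow = nn.Conv2d(shallow_ch, out_ch, 1)
        self.proj_deep = nn.Conv2d(deep_ch, out_ch, 1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch * 2, out_ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, shallow, deep):
        # Upsample the low-resolution deep features to the shallow resolution.
        deep = F.interpolate(self.proj_deep(deep), size=shallow.shape[-2:],
                             mode="bilinear", align_corners=False)
        shallow = self.proj_shallow(shallow)
        # Per-channel gate chooses how much of each stream to keep.
        w = self.gate(torch.cat([shallow, deep], dim=1))
        return w * shallow + (1.0 - w) * deep


# Example instantiation with hypothetical channel counts.
mdcm = MultiDepthConvModule(in_ch=256, branch_ch=128)
afim = AdaptiveFeatureIntegration(shallow_ch=64, deep_ch=128, out_ch=64)
```

How the modules are wired into the encoder-decoder backbone (which stages feed the MDCM, and where the AFIM output enters the decoder) is likewise not specified in the abstract, so the usage above should be read only as an interface example.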
               