Remote sensing scene classification (RSSC) attempts to label an image with a specific scene category. Recently, convolutional neural networks (CNNs) have shown powerful feature extraction capability by combining local and global features. However, local and global features are typically extracted independently, which ignores their complementary representations. In this letter, a local–global mutual learning (LML) method is proposed to capture both global and local features jointly. Specifically, local regions are first generated by highlighting the semantic areas in the corresponding original image. Then, a two-branch architecture extracts features from the local regions and the global image, respectively. Both a classification loss and a mutual learning loss are used to train the local and global branches simultaneously, constraining the two branches to promote each other. Experiments on two popular datasets demonstrate the effectiveness of the proposed method.
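The training objective described above, combining a per-branch classification loss with a mutual learning loss that couples the two branches, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `lml_loss`, the symmetric KL formulation (in the style of deep mutual learning), and the weighting factor `alpha` are assumptions.

```python
import torch
import torch.nn.functional as F

def lml_loss(logits_local, logits_global, labels, alpha=1.0):
    """Hypothetical sketch of a local-global mutual learning objective.

    Each branch minimizes its own cross-entropy classification loss plus a
    KL-divergence term that pulls its predicted distribution toward the other
    branch's (detached) predictions, so the two branches promote each other.
    """
    # Per-branch classification losses.
    ce_local = F.cross_entropy(logits_local, labels)
    ce_global = F.cross_entropy(logits_global, labels)

    # Mutual learning: symmetric KL between the branches' softmax outputs.
    # Each branch treats the other's output as a fixed target (detach).
    log_p_local = F.log_softmax(logits_local, dim=1)
    log_p_global = F.log_softmax(logits_global, dim=1)
    kl_local = F.kl_div(log_p_local, log_p_global.exp().detach(),
                        reduction="batchmean")
    kl_global = F.kl_div(log_p_global, log_p_local.exp().detach(),
                         reduction="batchmean")

    return ce_local + ce_global + alpha * (kl_local + kl_global)
```

In practice each branch would be a CNN (e.g. a shared or separate backbone per branch), with the local branch fed the cropped semantic regions and the global branch fed the full image; the combined loss is backpropagated through both branches at every step.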