To enhance mid–low-resolution ship detection, existing methods generally use image super-resolution (SR) as a preprocessing step and feed the super-resolved images to the detectors. However, these methods only use high-resolution (HR) images as ground-truth labels to supervise the training of their SR module and overlook the rich HR information in the detection stage. Inspired by recent advances in knowledge distillation, in this letter, we design a feature distillation framework that fully exploits the information in ground-truth HR images to handle mid–low-resolution ship detection. Our framework consists of a student network and a teacher network. The student network first super-resolves input images using an SR module and then feeds the super-resolved images to the detection module. The teacher network, whose architecture is identical to the student detection module, directly takes HR images as input to generate HR feature representations and distills these HR features to the student network through a distillation loss. Under our feature distillation framework, HR images are not only used as ground-truth labels to train the SR module but also provide "ground-truth" features to train the detection module, which enhances the detection performance of the student network. We apply our framework to several popular detectors, including FCOS, Faster-RCNN, Mask-RCNN, and Cascade-RCNN, and conduct extensive ablation studies to validate its effectiveness and generality. Experimental results on the HRSC2016, DOTA, and NWPU VHR-10 datasets demonstrate that, when applying our framework to Faster-RCNN, our method outperforms several state-of-the-art detection methods in terms of mAP50 and mAP75.
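The core of the framework is the distillation loss that pulls the student's features (computed from super-resolved images) toward the teacher's features (computed from the original HR images). The abstract does not specify the exact loss form, so the sketch below assumes a plain L2 penalty between matched feature maps, which is a common choice for feature distillation; the function name and shapes are illustrative, not from the letter.

```python
import numpy as np

def feature_distillation_loss(student_feat, teacher_feat):
    """L2 (mean squared error) distillation loss between feature maps.

    student_feat: features the student detector extracts from a
        super-resolved image, shape (C, H, W).
    teacher_feat: "ground-truth" features the teacher extracts from the
        corresponding HR image, same shape.

    Note: the exact loss used in the letter is not given in the abstract;
    MSE is assumed here as a typical feature-distillation objective.
    """
    assert student_feat.shape == teacher_feat.shape
    return float(np.mean((student_feat - teacher_feat) ** 2))

# Toy usage: a student feature map far from the teacher's incurs a
# larger penalty than one close to it.
teacher = np.ones((4, 8, 8))
close_student = teacher + 0.1
far_student = teacher + 1.0
print(feature_distillation_loss(close_student, teacher))  # small loss
print(feature_distillation_loss(far_student, teacher))    # larger loss
```

In training, this term would be added to the usual detection and SR reconstruction losses, so the student is supervised simultaneously by HR pixels (for the SR module) and HR features (for the detection module).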