In recent years, considerable progress has been made in ship detection in synthetic aperture radar (SAR) images; however, no research has been conducted on translating SAR ship images into flexible and accurate sentences. To explore image captioning for SAR ship images, we conduct the following work. First, to better describe SAR ship images, we propose principles for SAR image annotation based on the characteristics of SAR images. Second, to make better use of SAR ship images, we carefully construct a large-scale SAR ship image captioning dataset. Finally, we explore encoder–decoder models with an attention mechanism and apply these methods to the SAR ship image captioning task. Detailed experiments on the proposed dataset show that the encoder–decoder model with attention achieves good results on this task and that the generated sentences accurately describe SAR ship images. The dataset is publicly available at https://github.com/5132210/SSIC.git.
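The abstract does not specify the attention variant used; as a minimal illustrative sketch (not the authors' implementation), the snippet below shows Bahdanau-style additive attention, where a decoder hidden state scores each encoder image feature and the resulting weights form a context vector. All shapes, weight matrices, and the random inputs here are assumptions for demonstration only.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(features, hidden, Wf, Wh, v):
    """Additive (Bahdanau-style) attention, sketched for illustration.

    features: (N, Df) encoder outputs, e.g. CNN features of SAR image regions
    hidden:   (Dh,)   current decoder hidden state
    Wf, Wh, v: learned projection parameters (random here, for demo only)
    """
    scores = np.tanh(features @ Wf + hidden @ Wh) @ v  # (N,) alignment scores
    weights = softmax(scores)                          # attention distribution
    context = weights @ features                       # (Df,) weighted context
    return context, weights

rng = np.random.default_rng(0)
N, Df, Dh, Da = 5, 8, 6, 4            # 5 image-region features (hypothetical sizes)
features = rng.normal(size=(N, Df))
hidden = rng.normal(size=Dh)
Wf = rng.normal(size=(Df, Da))
Wh = rng.normal(size=(Dh, Da))
v = rng.normal(size=Da)

context, weights = additive_attention(features, hidden, Wf, Wh, v)
print(weights)  # non-negative weights summing to 1 over the image regions
```

At each decoding step the context vector is concatenated with the word embedding and fed to the decoder, letting the caption generator focus on different image regions per word.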