Airport runway extraction is important for the routine maintenance of civil airports, precision strikes on military airports, and the safe landing of UAVs. Unlike typical object extraction tasks, however, runways, taxiways, and roads share extremely similar attributes in material, texture, and shape, which makes them difficult to differentiate. Moreover, the gradients at some runway boundaries change slowly, so these boundaries are hard to extract accurately. To address these problems, a dual-field-of-view context and boundary perception network (DCBP) is proposed that combines long- and short-term contexts with boundary information of runways. Specifically, the dual-field-of-view context aggregation (DCA) module discovers semantic representations from two perspectives by exploring the interaction between long-term and short-term contexts. Meanwhile, the detailed features learned by the high-resolution branch guide the boundary perception (BP) module in locating runway boundaries. In addition, we provide the research community with a precisely labeled dataset, the airport runway segmentation (ARS) dataset, to advance runway segmentation from remote sensing images. Extensive experiments on this benchmark demonstrate that DCBP achieves more accurate extraction results and sharper boundaries on a variety of airport runways than competing methods. The code and dataset are available at https://github.com/weiAI1996/DCBP.
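To make the dual-field-of-view idea concrete, the sketch below shows one plausible way to aggregate a short-term (local) and a long-term (large-context) view of the same feature map. The branch design, dilation rate, and 1x1 fusion are assumptions for illustration only, not the authors' DCA implementation; consult the repository above for the actual module.

```python
import torch
import torch.nn as nn


class DualFieldOfViewContext(nn.Module):
    """Hypothetical sketch of a dual-field-of-view context block.

    Two parallel branches with different receptive fields stand in for the
    short-term (local) and long-term (large-context) views described in the
    abstract; their outputs are fused with a 1x1 convolution. All design
    choices here are assumptions, not the paper's DCA module.
    """

    def __init__(self, channels: int, long_dilation: int = 4):
        super().__init__()
        # Short-term branch: small receptive field for local texture cues.
        self.short_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Long-term branch: a dilated convolution enlarges the field of view
        # to capture runway-scale context without reducing resolution.
        self.long_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=long_dilation, dilation=long_dilation),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Fuse the two views back to the input channel count.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local_ctx = self.short_branch(x)
        global_ctx = self.long_branch(x)
        return self.fuse(torch.cat([local_ctx, global_ctx], dim=1))


if __name__ == "__main__":
    block = DualFieldOfViewContext(channels=64)
    features = torch.randn(1, 64, 128, 128)  # dummy backbone feature map
    print(block(features).shape)  # torch.Size([1, 64, 128, 128])
```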
               