Recently, deep convolutional neural networks (CNNs) have provided an effective tool for automated polyp segmentation in colonoscopy images. However, most CNN-based methods do not fully consider the feature interaction among different layers and often fail to provide satisfactory segmentation performance. In this article, a novel attention-guided pyramid context network (APCNet) is proposed for accurate and robust polyp segmentation in colonoscopy images. Specifically, considering that different network layers represent the polyp in different aspects, APCNet first extracts multilayer features in a pyramid structure and then uses an attention-guided multilayer aggregation strategy to refine the context features of each layer with the complementary information of the other layers. To obtain rich context features, APCNet employs a context extraction module (CEM) that explores the context information of each layer via local information retention and global information compaction. Through top-down deep supervision, APCNet implements coarse-to-fine polyp segmentation and precisely localizes the polyp region. Extensive experiments on two in-domain and four out-of-domain datasets show that APCNet is comparable to 19 state-of-the-art methods while offering a more favorable tradeoff between effectiveness and computational complexity.
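The abstract does not specify the internal operations of the CEM or of the attention-guided aggregation step. The sketch below is one plausible PyTorch reading, assuming a convolutional local branch for information retention, a pooled global branch for information compaction, and a sigmoid attention gate derived from a coarser layer. The class names, layer choices, and shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextExtractionModule(nn.Module):
    """Hypothetical CEM-style block: a local branch retains spatial
    detail while a global branch compacts image-level context; the
    two are fused by broadcast addition."""

    def __init__(self, channels: int):
        super().__init__()
        # Local information retention: 3x3 conv at full spatial resolution.
        self.local_branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global information compaction: pool to 1x1, then re-project.
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local_feat = self.local_branch(x)       # (N, C, H, W)
        global_feat = self.global_branch(x)     # (N, C, 1, 1)
        # Broadcast the compacted global context over the spatial map.
        return local_feat + global_feat


class AttentionGuidedAggregation(nn.Module):
    """Hypothetical aggregation step: a sigmoid attention map derived
    from a coarser pyramid level gates the finer level's features."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1),
                                  nn.Sigmoid())

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # Match the coarse features to the fine branch's resolution.
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:],
                                  mode="bilinear", align_corners=False)
        gate = self.attn(coarse_up)             # (N, 1, H, W) attention map
        return fine + fine * gate               # residual attention refinement


if __name__ == "__main__":
    cem = ContextExtractionModule(channels=64)
    agg = AttentionGuidedAggregation(channels=64)
    fine = torch.randn(1, 64, 88, 88)    # finer pyramid level
    coarse = torch.randn(1, 64, 44, 44)  # coarser pyramid level
    refined = agg(cem(fine), cem(coarse))
    print(refined.shape)                 # torch.Size([1, 64, 88, 88])
```

In the full model, the refined multilayer features would presumably feed the top-down, deeply supervised decoder described in the abstract; that part is omitted from this sketch.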