
Automatic tumor segmentation in breast ultrasound images using a dilated fully convolutional network combined with an active contour model


PURPOSE: Due to the low contrast, blurry boundaries, and abundant shadows in breast ultrasound (BUS) images, automatic tumor segmentation remains a challenging task. Deep learning provides a solution to this problem, since it can effectively extract representative features from lesions and the background in BUS images.

METHODS: A novel automatic tumor segmentation method is proposed by combining a dilated fully convolutional network (DFCN) with a phase-based active contour (PBAC) model. The DFCN is an improved fully convolutional network with dilated convolutions in the deeper layers, fewer parameters, and batch normalization; its large receptive field helps separate tumors from the background. Because the DFCN predictions are relatively rough, owing to blurry boundaries and variations in tumor size, the PBAC model, which adds both region-based and phase-based energy functions, is applied to further refine the segmentation results. The DFCN is trained and tested on dataset 1, which contains 570 BUS images from 89 patients. On dataset 2, a 10-fold support vector machine (SVM) classifier is employed to verify diagnostic ability using 460 features extracted from the segmentation results of the proposed method.

RESULTS: The proposed method was compared with three state-of-the-art networks: FCN-8s, U-net, and the dilated residual network (DRN). Experimental results on 170 BUS images show that the proposed method achieved a Dice similarity coefficient of 88.97 ± 10.01%, a Hausdorff distance (HD) of 35.54 ± 29.70 pixels, and a mean absolute deviation (MAD) of 7.67 ± 6.67 pixels, the best segmentation performance among the compared networks. On dataset 2, the area under the curve (AUC) of the 10-fold SVM classifier was 0.795, similar to classification based on the manual segmentation results.

CONCLUSIONS: The proposed automatic method may be sufficiently accurate, robust, and efficient for medical ultrasound applications.
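To make the DFCN building block concrete, the sketch below shows a dilated convolution layer with batch normalization of the kind the abstract describes for the deeper layers. This is an illustrative PyTorch assumption, not the authors' implementation; the channel counts and dilation rate are invented for the example.

    # Illustrative sketch (not the authors' code): a dilated convolution block
    # with batch normalization. A 3x3 kernel with dilation d covers a
    # (2d+1)x(2d+1) area, so the receptive field grows without adding
    # parameters relative to an ordinary 3x3 convolution.
    import torch
    import torch.nn as nn

    class DilatedConvBlock(nn.Module):
        def __init__(self, in_channels, out_channels, dilation=2):
            super().__init__()
            # padding=dilation keeps the spatial size unchanged for a 3x3 kernel.
            self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                                  padding=dilation, dilation=dilation, bias=False)
            self.bn = nn.BatchNorm2d(out_channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(self.bn(self.conv(x)))

    # Example: a 256-channel feature map keeps its 64x64 size while each output
    # pixel sees a much larger neighborhood of the input.
    x = torch.randn(1, 256, 64, 64)
    block = DilatedConvBlock(256, 256, dilation=4)
    print(block(x).shape)  # torch.Size([1, 256, 64, 64])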
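The reported metrics can be reproduced from a predicted mask and a ground-truth mask as sketched below. The boundary extraction and the symmetric definitions of HD and MAD are assumptions for illustration; the paper does not specify its exact evaluation code.

    # Illustrative sketch (assumed definitions): Dice similarity coefficient,
    # Hausdorff distance (HD), and mean absolute deviation (MAD) between two
    # binary segmentation masks, measured in pixels.
    import numpy as np
    from scipy.ndimage import binary_erosion
    from scipy.spatial.distance import cdist

    def dice(pred, gt):
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        return 2.0 * inter / (pred.sum() + gt.sum())

    def boundary_points(mask):
        mask = mask.astype(bool)
        edge = mask & ~binary_erosion(mask)   # pixels removed by one erosion step
        return np.argwhere(edge)

    def hd_and_mad(pred, gt):
        p, g = boundary_points(pred), boundary_points(gt)
        d = cdist(p, g)                           # pairwise pixel distances
        d_pg = d.min(axis=1)                      # each pred point -> nearest gt point
        d_gp = d.min(axis=0)                      # each gt point -> nearest pred point
        hd = max(d_pg.max(), d_gp.max())          # symmetric Hausdorff distance
        mad = 0.5 * (d_pg.mean() + d_gp.mean())   # mean boundary deviation
        return hd, mad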
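The diagnostic verification on dataset 2 could be organized as a 10-fold cross-validated SVM evaluated by AUC, roughly as follows. This is an assumed scikit-learn workflow, not the authors' pipeline; X and y are placeholders standing in for the 460 extracted features and the benign/malignant labels.

    # Illustrative sketch (assumed workflow): 10-fold cross-validated SVM
    # classification scored by AUC.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score, StratifiedKFold

    X = np.random.rand(200, 460)          # placeholder for the 460-feature matrix
    y = np.random.randint(0, 2, 200)      # placeholder benign/malignant labels

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print("mean AUC: %.3f" % aucs.mean())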

Keywords: network; segmentation; fully convolutional; tumor segmentation; model; automatic tumor

Journal Title: Medical Physics
Year Published: 2019
