Segmentation of breast ultrasound (BUS) images is an important step for the subsequent assessment and diagnosis of breast lesions. Recently, deep-learning-based methods have achieved satisfactory performance in many computer vision tasks, especially in medical image segmentation. Nevertheless, these methods typically require large amounts of pixel-wise labeled data, which are expensive to obtain in medical practice. In this study, we propose a new segmentation method for breast anatomy based on dense prediction and local fusion of superpixels, designed for settings with scarce labeled data. First, the proposed method generates superpixels from the BUS image after enhancement by histogram equalization, a bilateral filter, and a pyramid mean shift filter. Second, using a convolutional neural network (CNN) and a distance metric learning-based classifier, the superpixels are projected into an embedding space and classified by the distance between their embeddings and the centers of the categories. Because each BUS image yields many superpixels, a large number of training samples can be generated from few images, which mitigates the scarcity of labeled data. To correct misclassified superpixels, $K$-nearest neighbor (KNN) is used to reclassify the superpixels within each local region based on the spatial relationships among them. Fivefold cross-validation was performed, and the experimental results show that our method outperforms several commonly used deep-learning methods when large amounts of labeled data are unavailable (48 BUS images for training and 12 BUS images for testing).