PURPOSE
Breast cancer is the most commonly occurring cancer worldwide. Ultrasound reflectivity imaging can be used to obtain breast ultrasound (BUS) images, from which benign and malignant tumors can be classified. However, such classification is subjective and depends on the experience and skill of the operators and doctors. Automatic classification methods can assist doctors and improve objectivity, but current convolutional neural networks (CNNs) are not good at learning global features, and vision transformers (ViTs) are not good at extracting local features. In this study, we proposed a VGG attention vision transformer (VGGA-ViT) network to overcome these disadvantages.

METHODS
In the proposed method, we used a convolutional neural network (CNN) module to extract local features and employed a vision transformer (ViT) module to learn the global relationships between different regions and enhance the relevant local features. The CNN module, named the VGG attention (VGGA) module, was composed of a visual geometry group (VGG) backbone, a feature-extraction fully connected layer, and a squeeze-and-excitation (SE) block. Both the VGG backbone and the ViT module were pre-trained on the ImageNet dataset and re-trained on BUS samples in this study. Two BUS datasets were employed for validation.

RESULTS
Cross-validation was conducted on the two BUS datasets.

CONCLUSIONS
In this study, we proposed the VGGA-ViT for BUS classification, which is good at learning both local and global features. The proposed network achieved higher accuracy than the compared previous methods. This article is protected by copyright. All rights reserved.
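The squeeze-and-excitation (SE) block mentioned in the methods recalibrates channel responses: it squeezes each channel to a scalar by global average pooling, passes those scalars through two fully connected layers (ReLU, then sigmoid), and rescales each channel by the resulting weight. The following is a minimal NumPy sketch of that mechanism only; the weight matrices and the toy feature-map shapes are hypothetical and do not come from the paper's VGGA module.

```python
import numpy as np

def squeeze_and_excitation(feature_map, w1, w2):
    """Channel recalibration for a (C, H, W) feature map.

    squeeze:    global average pooling over spatial dims -> (C,)
    excitation: reduction FC + ReLU, expansion FC + sigmoid -> per-channel weights
    scale:      multiply each channel by its weight
    """
    z = feature_map.mean(axis=(1, 2))          # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)                # reduction FC (C -> C/r) + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))        # expansion FC (C/r -> C) + sigmoid
    return feature_map * s[:, None, None]      # scale: broadcast weights over H, W

# Toy example (hypothetical sizes): 4 channels, 8x8 spatial, reduction ratio r = 2.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4)) * 0.1         # reduction weights
w2 = rng.standard_normal((4, 2)) * 0.1         # expansion weights
out = squeeze_and_excitation(fmap, w1, w2)
print(out.shape)                               # same shape as the input, (4, 8, 8)
```

Because the sigmoid bounds each weight to (0, 1), the block can only attenuate channels relative to each other, which is how it emphasizes the tumor-relevant local features before the ViT module models their global relationships.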