Organ segmentation is one of the most important steps in many medical image analysis tasks. Recently, semi-supervised learning (SSL) has attracted much attention for its ability to reduce labeling cost. However, most existing SSL methods neglect the prior shape and position information specific to medical images, leading to unsatisfactory localization and non-smooth object boundaries. In this paper, we propose a novel atlas-based semi-supervised segmentation network with multi-task learning for medical organs, named MTL-ABS3Net, which incorporates anatomical priors and makes full use of unlabeled data in a self-training and multi-task learning manner. MTL-ABS3Net consists of two components: an Atlas-Based Semi-Supervised Segmentation Network (ABS3Net) and a Reconstruction-Assisted Module (RAM). Specifically, ABS3Net improves on existing SSL methods by exploiting an atlas prior to generate credible pseudo labels in a self-training manner, while RAM further assists the segmentation network by capturing anatomical structures from the original images in a multi-task learning manner. Better reconstruction quality is achieved by using an MS-SSIM loss function, which further improves segmentation accuracy. Experimental results on liver and spleen datasets demonstrate that our method significantly outperforms existing state-of-the-art methods.
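To make the multi-task objective concrete, the following is a minimal sketch (not the authors' released code) assuming the training loss is a weighted sum of a segmentation term on real or pseudo labels and a reconstruction term of the form 1 − MS-SSIM. The function and parameter names (`multitask_loss`, `lambda_rec`, `ms_ssim_fn`) are hypothetical; a real MS-SSIM implementation such as `pytorch_msssim.ms_ssim` would be passed in place of the placeholder used here.

```python
import torch
import torch.nn.functional as F

def multitask_loss(seg_logits, labels, recon, image, ms_ssim_fn, lambda_rec=0.5):
    """Hypothetical combined objective: segmentation + MS-SSIM reconstruction.

    `ms_ssim_fn` is assumed to return a similarity score in [0, 1].
    """
    # Segmentation term: cross-entropy against ground-truth or pseudo labels.
    seg_loss = F.cross_entropy(seg_logits, labels)
    # Reconstruction term: 1 - MS-SSIM, so higher structural similarity
    # between the reconstruction and the original image gives a lower loss.
    rec_loss = 1.0 - ms_ssim_fn(recon, image)
    return seg_loss + lambda_rec * rec_loss

# Toy usage with a placeholder similarity function (stand-in for a real
# MS-SSIM), just to illustrate the tensor shapes involved.
logits = torch.randn(2, 2, 64, 64)           # (batch, classes, H, W)
labels = torch.randint(0, 2, (2, 64, 64))    # e.g. pseudo labels from the atlas branch
recon  = torch.rand(2, 1, 64, 64)            # reconstructed image from RAM
image  = torch.rand(2, 1, 64, 64)            # original input image
fake_ms_ssim = lambda a, b: 1.0 - (a - b).abs().mean()  # placeholder only
print(multitask_loss(logits, labels, recon, image, fake_ms_ssim))
```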