Emerging deep learning-based methods have enabled great progress in automatic neuron segmentation from Electron Microscopy (EM) volumes. However, the success of existing methods relies heavily on large numbers of annotations, which are often expensive and time-consuming to collect due to the dense distribution and complex structure of neurons. When the required quantity of manual annotations cannot be reached, these methods become fragile. To address this issue, in this article we propose a two-stage semi-supervised learning method for neuron segmentation that fully exploits the information in unlabeled data. First, we devise a proxy task that pre-trains the network by reconstructing original volumes from their perturbed counterparts. This pre-training strategy implicitly extracts meaningful information about neuron structure from unlabeled data, facilitating the next stage of learning. Second, we regularize the supervised learning process with pixel-level prediction consistency between unlabeled samples and their perturbed counterparts. This improves the generalizability of the learned model and its ability to adapt to the diverse data distributions in EM volumes, especially when the number of labels is limited. Extensive experiments on representative EM datasets demonstrate the superior performance of our reinforced consistency learning compared to supervised learning, i.e., up to a 400% gain on the VOI metric with only a few available labels, on par with a model trained on ten times the amount of labeled data in a supervised manner. Code is available at https://github.com/weih527/SSNS-Net.
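As a concrete illustration of the two stages described above, here is a minimal PyTorch-style sketch. The masking perturbation, the MSE reconstruction and consistency losses, the cross-entropy supervised loss, and the weighting factor `lam` are illustrative assumptions rather than the authors' exact formulation; see the linked repository for the actual implementation.

```python
import torch
import torch.nn.functional as F

def perturb(volume, mask_ratio=0.25):
    """One possible perturbation (assumed here): randomly zero out a
    fraction of voxels in the input volume."""
    mask = (torch.rand_like(volume) > mask_ratio).float()
    return volume * mask

def pretrain_step(model, volume):
    """Stage 1: self-supervised pre-training. The network reconstructs
    the original volume from its perturbed counterpart."""
    recon = model(perturb(volume))
    return F.mse_loss(recon, volume)

def semi_supervised_step(model, labeled_x, labels, unlabeled_x, lam=1.0):
    """Stage 2: supervised loss on labeled data plus a pixel-level
    consistency term between the predictions for an unlabeled volume
    and for its perturbed counterpart."""
    sup_loss = F.cross_entropy(model(labeled_x), labels)
    p_clean = model(unlabeled_x).softmax(dim=1)
    p_pert = model(perturb(unlabeled_x)).softmax(dim=1)
    # Treat the prediction on the clean volume as the target, so the
    # consistency gradient flows only through the perturbed branch.
    consistency = F.mse_loss(p_pert, p_clean.detach())
    return sup_loss + lam * consistency
```

In this sketch the same network is trained first with `pretrain_step` on unlabeled volumes, then fine-tuned with `semi_supervised_step` on a mix of labeled and unlabeled batches; detaching the clean-branch prediction is one common design choice for stabilizing consistency training.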