Segmenting filaments in bioimages is a critical step in a wide range of applications, including neuron reconstruction and blood vessel tracing. To achieve acceptable segmentation performance, most existing methods need to annotate large amounts of filamentary images during the training stage, so they face the common challenge of high annotation cost. To address this problem, we propose an interactive segmentation method that actively selects a few super-pixels for annotation, which alleviates the burden on annotators. Specifically, we first apply the Simple Linear Iterative Clustering (SLIC) algorithm to segment filamentary images into compact and consistent super-pixels, and then propose a novel batch-mode active learning method to select the most representative and informative (BMRI) super-pixels for pixel-level annotation. We then use a bagging strategy to extract several sets of pixels from the annotated super-pixels, and use them to build different Laplacian Regularized Gaussian Mixture Models (Lap-GMM) for pixel-level segmentation. Finally, we perform classifier ensembling by combining the multiple Lap-GMM models with a majority voting strategy. We evaluate our method on three publicly available filamentary image datasets. Experimental results show that, to achieve performance comparable to existing methods, the proposed algorithm saves 40 percent of the experts' annotation effort.
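To make the described pipeline concrete, the sketch below strings the stages together in Python. It is a minimal illustration under stated assumptions, not the authors' implementation: scikit-image's SLIC provides the super-pixel over-segmentation, a random choice of super-pixels stands in for the BMRI active-learning selection, and a plain scikit-learn GaussianMixture replaces the paper's Laplacian Regularized GMM; the function name `segment_filaments` and all parameter values are hypothetical.

```python
# Hedged sketch of the pipeline: SLIC super-pixels -> (placeholder) super-pixel
# selection -> bagged GMMs (stand-in for Lap-GMM) -> majority-vote segmentation.
import numpy as np
from skimage.segmentation import slic
from sklearn.mixture import GaussianMixture


def segment_filaments(image, n_segments=500, n_annotated=20, n_models=5, seed=0):
    """Toy filament segmentation of a 2D grayscale image (float array in [0, 1])."""
    rng = np.random.default_rng(seed)

    # 1) Over-segment the image into compact, consistent super-pixels with SLIC.
    superpixels = slic(image, n_segments=n_segments, compactness=10,
                       channel_axis=None)  # grayscale input assumed

    # 2) Placeholder for the BMRI active-learning step: the paper selects the
    #    most representative and informative super-pixels; here we just sample
    #    a few at random and treat their pixels as expert-annotated.
    chosen = rng.choice(np.unique(superpixels), size=n_annotated, replace=False)
    annotated_pixels = image[np.isin(superpixels, chosen)].reshape(-1, 1)

    # 3) Bagging: fit several two-component GMMs on bootstrap resamples of the
    #    annotated pixels (a simplified stand-in for the paper's Lap-GMM).
    all_pixels = image.reshape(-1, 1)
    votes = np.zeros(image.shape, dtype=int)
    for m in range(n_models):
        idx = rng.integers(0, len(annotated_pixels), size=len(annotated_pixels))
        gmm = GaussianMixture(n_components=2, random_state=m)
        gmm.fit(annotated_pixels[idx])
        pred = gmm.predict(all_pixels).reshape(image.shape)
        # Align component 1 with the brighter (filament) class before voting.
        if gmm.means_.ravel().argmax() == 0:
            pred = 1 - pred
        votes += pred

    # 4) Majority voting across the ensemble yields the final binary mask.
    return votes > (n_models // 2)
```

In the actual method the selection in step 2 is driven by the batch-mode representativeness and informativeness criteria, and the per-model fit in step 3 additionally carries a graph-Laplacian regularizer over unlabeled pixels; both are omitted here for brevity.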