Automatic vertebrae identification and localization from arbitrary computed tomography (CT) images is challenging. Vertebrae usually share similar morphological appearance. Because of pathology and the arbitrary field-of-view of CT scans, one can hardly rely on the existence of some anchor vertebrae or on parametric methods to model the appearance and shape. To solve the problem, we argue that: 1) one should make use of short-range contextual information, such as the presence of nearby organs (if any), to roughly estimate the target vertebrae; and 2) due to the unique anatomical structure of the spinal column, vertebrae have a fixed sequential order, which provides important long-range contextual information to further calibrate the results. We propose a robust and efficient vertebrae identification and localization system that can inherently learn to incorporate both the short- and long-range contextual information in a supervised manner. To this end, we develop a multi-task 3-D fully convolutional neural network to effectively extract the short-range contextual information around the target vertebrae. For the long-range contextual information, we propose a multi-task bidirectional recurrent neural network to encode the spatial and contextual information among the vertebrae of the visible spinal column. We demonstrate the effectiveness of the proposed approach on a challenging data set, and the experimental results show that our approach outperforms the state-of-the-art methods by a significant margin.
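The two-stage idea described above can be illustrated with a minimal sketch, assuming per-vertebra feature vectors have already been extracted by a 3-D FCN. This is not the authors' implementation; the weight matrices, dimensions, and the vanilla tanh RNN cell are all illustrative placeholders showing how a bidirectional pass over the ordered vertebra sequence aggregates long-range context in both directions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(feats, W_x, W_h, reverse=False):
    """Run a simple tanh RNN over the ordered vertebra sequence.

    feats: (n_vertebrae, feat_dim) per-vertebra features from the FCN stage.
    Returns the hidden state at every vertebra, aligned to the input order.
    """
    seq = feats[::-1] if reverse else feats
    h = np.zeros(W_h.shape[0])
    out = []
    for x in seq:
        h = np.tanh(W_x @ x + W_h @ h)  # carry context along the spine
        out.append(h)
    if reverse:
        out = out[::-1]  # re-align tail-to-head pass with input order
    return np.stack(out)

# Illustrative sizes only: 6 visible vertebrae, 8-dim FCN features, 4-dim hidden state.
n_vertebrae, feat_dim, hid_dim = 6, 8, 4
feats = rng.normal(size=(n_vertebrae, feat_dim))   # stand-in for FCN outputs
W_x = rng.normal(size=(hid_dim, feat_dim)) * 0.1
W_h = rng.normal(size=(hid_dim, hid_dim)) * 0.1

fwd = rnn_pass(feats, W_x, W_h)                 # head-to-tail context
bwd = rnn_pass(feats, W_x, W_h, reverse=True)   # tail-to-head context
context = np.concatenate([fwd, bwd], axis=1)    # (n_vertebrae, 2 * hid_dim)
```

Each vertebra's refined representation (`context`) now depends on every other vertebra in the visible column, which is what lets the sequential-order constraint calibrate labels that local appearance alone cannot disambiguate.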