Existing deep-learning-based pulmonary nodule classification models usually use only images and benign-malignant labels as inputs for training. Image attributes of the nodules, which are human-nameable high-level semantic labels, are rarely used to build a convolutional neural network (CNN). In this paper, a new method is proposed that combines the advantages of two classification tasks, benign-malignant classification and image attribute classification of pulmonary nodules, within a single deep learning network to improve classification accuracy. For this purpose, a unique 3D CNN is built to learn attribute classification and benign-malignant classification simultaneously, and a novel loss function is designed to balance the influence of the two tasks. The CNN is trained on the publicly available Lung Image Database Consortium (LIDC) dataset and evaluated with cross-validation to predict the risk of a pulmonary nodule being malignant. The proposed method achieves an accuracy of 91.47%, which is better than many existing models. The experimental findings show that if the CNN is built properly, nodule attribute classification and benign-malignant classification can benefit from each other: by using attribute learning as a controlling factor in the deep learning scheme, the accuracy of pulmonary nodule classification can be significantly improved.
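The abstract does not give the exact architecture or loss formulation, so the following is only a minimal sketch of the joint-learning idea it describes: a shared 3D CNN backbone with two heads, one for benign-malignant classification and one for semantic attribute prediction, combined through a weighted loss. The backbone layers, the number of attribute outputs (eight, following the LIDC annotation set), and the balancing weight `alpha` are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn


class JointNoduleNet(nn.Module):
    """Hypothetical 3D CNN with a shared backbone and two task heads:
    benign-malignant classification and nodule attribute regression."""

    def __init__(self, num_attributes: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.malignancy_head = nn.Linear(32, 2)              # benign vs. malignant
        self.attribute_head = nn.Linear(32, num_attributes)  # e.g. subtlety, margin, texture

    def forward(self, x):
        feats = self.backbone(x)
        return self.malignancy_head(feats), self.attribute_head(feats)


def joint_loss(mal_logits, mal_target, attr_pred, attr_target, alpha=0.5):
    """Weighted sum of the two task losses; `alpha` is a stand-in for the
    balancing term in the paper's (unspecified) loss function."""
    mal_loss = nn.functional.cross_entropy(mal_logits, mal_target)
    attr_loss = nn.functional.mse_loss(attr_pred, attr_target)
    return mal_loss + alpha * attr_loss


# Toy usage with random 32x32x32 nodule patches.
model = JointNoduleNet()
patches = torch.randn(4, 1, 32, 32, 32)
mal_labels = torch.randint(0, 2, (4,))
attr_labels = torch.rand(4, 8)
mal_logits, attr_pred = model(patches)
loss = joint_loss(mal_logits, mal_labels, attr_pred, attr_labels)
loss.backward()
```

Sharing one backbone across both heads is what lets attribute supervision act as a controlling factor: gradients from the attribute loss shape the same features used for the benign-malignant decision.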