Computer-aided diagnosis based on deep learning is progressively being deployed for the analysis of medical images, yet the poor robustness and generalization of such models pose a challenge for clinical application. The scarcity of training data aggravates this problem. To mitigate this issue, we approach the problem from a transfer-learning (i.e., pretraining–fine-tuning) perspective. More importantly, unlike the traditional setting that transfers knowledge from the natural image domain to the medical image domain, we find that knowledge from a similar domain can further boost the model's robustness and generalization. In this article, we propose a generalized lung segmentation framework with two parts: 1) an unsupervised tilewise autoencoder (T-AE) pretraining architecture for learning meaningful and transferable knowledge and 2) a segmentation model regularized by a reconstruction network for fine-tuning. Experiments are conducted on several chest X-ray datasets. Our results show accurate lung segmentation, with Dice coefficients of 96.95%, 97.19%, and 95.77% on the Montgomery County (MC) chest X-ray dataset, the Shenzhen (SH) chest X-ray dataset, and the Japanese Society of Radiological Technology (JSRT) database, respectively. Moreover, quantitative and qualitative results demonstrate the proposed method's superior robustness to data corruption and its high generalization performance on unseen datasets, especially when training data are limited.
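The Dice coefficient reported above is a standard overlap metric for segmentation masks. As a point of reference (not part of the paper's code), a minimal sketch of how it is typically computed for binary lung masks might look like this; the function name and the smoothing term `eps` are illustrative choices:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |pred ∩ target| / (|pred| + |target|), with a small
    eps to avoid division by zero on empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x2 masks: one overlapping pixel out of three foreground pixels.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(round(dice_coefficient(a, b), 3))  # → 0.667
```

A Dice value of 96.95% thus means the predicted and ground-truth lung masks share nearly all of their foreground pixels.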