Convolutional neural network (CNN)-based denoisers have been successful in low-dose CT (LDCT) denoising tasks. However, image blurring in the denoised images remains a problem, and it is mainly caused by pixel-level losses used during training. To reduce blur, perceptual loss computed with an ImageNet-pretrained VGG network is widely used, and it improves image quality by preserving the original structural details in CT images. However, the statistics of the natural RGB images in ImageNet differ from those of CT images, so the features learned by the ImageNet-pretrained model do not generalize well to representing CT images. In this work, we propose a CT-specific perceptual loss scheme and apply it to train an LDCT denoiser. As the feature extractor for CT images, we develop a CT image classification network that classifies CT images as lesion-present or lesion-absent. To improve the representation power of the proposed feature extractor, we adopt network parameters learned from RGB images through transfer learning. We empirically demonstrate that 1) transfer learning helps improve the representation power of the CT classifier, and 2) using this transfer-learned CT classifier as the feature extractor for perceptual loss resolves the CT-number bias caused by VGG loss and helps retain the small features and image textures of normal-dose CT (NDCT) images.
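The core idea of perceptual loss, measuring the error between feature maps of the denoised and target images rather than between raw pixels, can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual method: the hand-picked Laplacian kernel below merely stands in for an intermediate layer of a learned feature extractor (VGG in prior work, the CT lesion classifier in this paper).

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2-D valid convolution followed by ReLU, standing in for one CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU nonlinearity

# Hypothetical fixed "feature extractor": a Laplacian edge filter.
# In the paper, this role is played by intermediate layers of the
# transfer-learned CT classification network.
KERNEL = np.array([[0.0, 1.0, 0.0],
                   [1.0, -4.0, 1.0],
                   [0.0, 1.0, 0.0]])

def pixel_loss(denoised, ndct):
    """Plain pixel-level MSE; minimizing this alone tends to blur images."""
    return np.mean((denoised - ndct) ** 2)

def perceptual_loss(denoised, ndct):
    """MSE computed in feature space rather than pixel space."""
    f_d = conv2d_valid(denoised, KERNEL)
    f_n = conv2d_valid(ndct, KERNEL)
    return np.mean((f_d - f_n) ** 2)

# In training, the two are typically combined, e.g.:
#   total = pixel_loss(x, y) + lam * perceptual_loss(x, y)
# where lam is a weighting hyperparameter (an assumption here; the exact
# combination used in the paper may differ).
```

Because the Laplacian responds to edges and fine structure, the feature-space term penalizes loss of detail that a pixel-level MSE would largely ignore, which is the intuition behind using perceptual loss to combat blurring.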