PET is a popular medical imaging modality for various clinical applications, including diagnosis and image-guided radiation therapy. Low-dose PET (LDPET) at a minimized radiation dosage is highly desirable in the clinic, since PET imaging involves ionizing radiation and raises concerns about the risk of radiation exposure. However, a reduced dose of radioactive tracer can degrade image quality and affect clinical diagnosis. In this paper, a supervised deep learning approach, named S-CycleGAN, which combines a generative adversarial network (GAN) with a cycle-consistency loss, a Wasserstein distance loss, and an additional supervised learning loss, is proposed to establish a non-linear end-to-end mapping model and is used to recover LDPET brain images. The proposed model and two recently published deep learning methods (RED-CNN and 3D-cGAN) were applied to 10 testing datasets at 10% and 30% dose, as well as a series of simulation datasets with embedded lesions of different activities, sizes, and shapes. Besides visual comparisons, six measures, including NRMSE, SSIM, PSNR, LPIPS, SUVmax, and SUVmean, were evaluated on the 10 testing datasets and 45 simulated datasets. Compared with RED-CNN and 3D-cGAN, our S-CycleGAN approach achieved comparable SSIM and PSNR and slightly higher noise, but obtained a better perceptual score, preserved image details better, and yielded much better SUVmean and SUVmax. Quantitative and qualitative evaluations indicate that the proposed approach is accurate, efficient, and robust compared with other state-of-the-art deep learning methods.
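
For intuition, below is a minimal sketch of how the composite training objective described above (Wasserstein adversarial terms, cycle-consistency, and an additional supervised term on paired low-dose/full-dose images) could be assembled. It assumes PyTorch and hypothetical generator/critic modules G_ld2fd, G_fd2ld, D_fd, and D_ld; the L1 form of the supervised term and the loss weights lambda_cyc and lambda_sup are illustrative assumptions, not the paper's exact choices.

```python
# Hypothetical sketch of a composite S-CycleGAN-style generator objective:
# Wasserstein adversarial + cycle-consistency + supervised (paired) L1 terms.
# Module names and weights are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def s_cyclegan_generator_loss(G_ld2fd, G_fd2ld, D_fd, D_ld,
                              ld_img, fd_img,
                              lambda_cyc=10.0, lambda_sup=1.0):
    """Combine Wasserstein adversarial, cycle-consistency, and supervised terms."""
    fake_fd = G_ld2fd(ld_img)   # estimated full-dose image from low-dose input
    fake_ld = G_fd2ld(fd_img)   # estimated low-dose image from full-dose input

    # Wasserstein adversarial terms: generators try to maximize the critic scores.
    adv = -D_fd(fake_fd).mean() - D_ld(fake_ld).mean()

    # Cycle-consistency: mapping forward and back should recover the input.
    cyc = l1(G_fd2ld(fake_fd), ld_img) + l1(G_ld2fd(fake_ld), fd_img)

    # Supervised term: paired low-/full-dose images allow a direct pixel-wise loss.
    sup = l1(fake_fd, fd_img) + l1(fake_ld, ld_img)

    return adv + lambda_cyc * cyc + lambda_sup * sup
```

In this kind of design, the supervised term exploits the paired training data to anchor the recovered full-dose estimate, while the cycle-consistency and adversarial terms encourage realistic texture and detail preservation.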
               