For simultaneous positron-emission-tomography and magnetic-resonance-imaging (PET-MRI) systems, early methods reconstructed the PET and MRI images independently, whereas recent work has demonstrated that joint reconstruction methods improve the reconstructions of both modalities. The current state-of-the-art joint reconstruction priors capture fine-scale PET-MRI dependencies through the image gradients at corresponding spatial locations in the PET and MRI images. In the general context of image restoration, patch-based models (e.g., sparse dictionaries) have outperformed gradient-based models because they capture image texture more faithfully. Thus, we propose a novel joint PET-MRI patch-based dictionary prior that learns inter-modality higher-order dependencies together with intra-modality textural patterns in the images. We model the joint-dictionary prior as a Markov random field and propose a novel Bayesian framework for joint reconstruction of PET and accelerated-MRI images, using expectation maximization for inference. We evaluate all methods on simulated brain datasets as well as on in vivo datasets. We compare our joint dictionary prior with the recently proposed joint priors based on image gradients, as well as with independently applied patch-based priors. Our method demonstrates qualitative and quantitative improvement over the state of the art in both PET and MRI reconstructions.
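The abstract does not spell out the construction, but a minimal sketch of the joint-dictionary idea is to extract co-located patches from the two modalities, concatenate each pair into a single vector, and learn a shared sparse dictionary over those vectors, so that each atom couples PET and MRI structure at the same spatial location. The function name, patch size, normalization, and the scikit-learn-based learner below are illustrative assumptions, not the authors' implementation, which models the prior as a Markov random field and infers it via expectation maximization.

```python
# Illustrative sketch of a joint PET-MRI patch dictionary (not the paper's
# exact MRF/EM formulation): learn atoms over concatenated co-located patches.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning


def learn_joint_dictionary(pet_img, mri_img, patch_size=(8, 8), n_atoms=256):
    """Learn a dictionary over concatenated PET-MRI patches so that each
    atom jointly encodes inter-modality structure and per-modality texture."""
    # Patches are extracted in the same raster order from both images,
    # so index i refers to the same spatial location in PET and MRI.
    pet_patches = extract_patches_2d(pet_img, patch_size)
    mri_patches = extract_patches_2d(mri_img, patch_size)

    # Flatten patches, remove per-patch means, and scale each modality
    # so that neither dominates the sparse code.
    pet_vecs = pet_patches.reshape(len(pet_patches), -1)
    mri_vecs = mri_patches.reshape(len(mri_patches), -1)
    pet_vecs = (pet_vecs - pet_vecs.mean(axis=1, keepdims=True)) / (pet_vecs.std() + 1e-8)
    mri_vecs = (mri_vecs - mri_vecs.mean(axis=1, keepdims=True)) / (mri_vecs.std() + 1e-8)

    # Each training sample stacks a PET patch with its co-located MRI patch.
    joint = np.hstack([pet_vecs, mri_vecs])  # shape: (n_patches, 2 * patch_dim)

    # Sparse dictionary learning over the joint patch vectors.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       batch_size=256)
    dico.fit(joint)
    return dico.components_  # each row is one joint PET-MRI atom
```

In a reconstruction loop, such a dictionary could act as a prior by encouraging each co-located PET-MRI patch pair to admit a sparse code under the learned atoms, which is the role the joint-dictionary prior plays in the Bayesian framework described above.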
               