
Deep learning‐based convolutional neural network for intramodality brain MRI synthesis



Abstract

Purpose: The availability of multicontrast magnetic resonance (MR) images increases the clinical information available for the diagnosis and treatment of brain cancer patients. However, acquiring the complete set of multicontrast MR images is not always practically feasible. In this study, we developed a state-of-the-art deep learning convolutional neural network (CNN) for image-to-image translation across three standard MRI contrasts for the brain.

Methods: The BRATS'2018 MRI dataset of 477 patients clinically diagnosed with glioma brain cancer was used in this study, with each patient having T1-weighted (T1), T2-weighted (T2), and FLAIR contrasts. It was randomly split into 64%, 16%, and 20% as training, validation, and test sets, respectively. We developed a U-Net model to learn the nonlinear mapping from a source image contrast to a target image contrast across the three MRI contrasts. The model was trained and validated on 2D paired MR images using a mean-squared error (MSE) cost function, the Adam optimizer with a 0.001 learning rate, and 120 epochs with a batch size of 32. The generated synthetic MR images were evaluated against the ground-truth images by computing the MSE, mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM).

Results: The synthetic MR images generated by our model were nearly indistinguishable from the real images on the test dataset for all translations, except that the synthetic FLAIR images had slightly lower quality and exhibited loss of detail. The ranges of average PSNR, MSE, MAE, and SSIM values over the six translations were 29.44–33.25 dB, 0.0005–0.0012, 0.0086–0.0149, and 0.932–0.946, respectively. Our results were as good as the best results reported by other deep learning models on the BRATS datasets.

Conclusions: Our U-Net model demonstrated that it can accurately perform image-to-image translation across brain MRI contrasts. It holds great promise for clinical use, as the availability of complete multicontrast MRIs could improve clinical decision-making and diagnosis for brain cancer patients. This approach may be clinically relevant and represents a significant step toward efficiently filling the gap of absent MR sequences without additional scanning.
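The sketch below illustrates the training configuration and evaluation metrics stated in the Methods section; it is not the authors' released code. It assumes a Keras-style 2D U-Net (the `build_unet` constructor and the paired slice arrays are hypothetical placeholders) trained with an MSE loss, the Adam optimizer at a 0.001 learning rate, 120 epochs, and a batch size of 32, and scored against ground truth with MSE, MAE, PSNR, and SSIM.

```python
# Minimal sketch of the reported setup (hypothetical helper names, not the authors' code).
import numpy as np
import tensorflow as tf
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def train_translation_model(build_unet, src_train, tgt_train, src_val, tgt_val):
    """Train a source->target contrast translation model on paired 2D MR slices."""
    model = build_unet()  # hypothetical constructor returning a 2D U-Net (tf.keras.Model)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="mse")  # MSE cost function, as in the abstract
    model.fit(src_train, tgt_train,
              validation_data=(src_val, tgt_val),
              epochs=120, batch_size=32)
    return model


def evaluate_slice(pred, truth):
    """Compute the four image-quality metrics reported in the paper for one slice pair."""
    mse = float(np.mean((pred - truth) ** 2))
    mae = float(np.mean(np.abs(pred - truth)))
    # data_range=1.0 assumes intensities normalized to [0, 1]
    psnr = peak_signal_noise_ratio(truth, pred, data_range=1.0)
    ssim = structural_similarity(truth, pred, data_range=1.0)
    return {"MSE": mse, "MAE": mae, "PSNR": psnr, "SSIM": ssim}
```

In this reading, each of the six translations (e.g., T1-to-T2, T2-to-FLAIR) would be trained and evaluated as a separate source/target pairing, with the metrics averaged over the test slices.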

Keywords: neural network; brain MRI; convolutional neural network; deep learning; image synthesis

Journal Title: Journal of Applied Clinical Medical Physics
Year Published: 2022


