Since the advent of deep convolutional neural networks (DNNs), computer vision has seen extremely rapid progress that has led to major advances in medical imaging. Every year, many new methods are reported at conferences such as the International Conference on Medical Image Computing and Computer-Assisted Intervention and Machine Learning for Medical Image Reconstruction, or published online at the preprint server arXiv. There is a plethora of surveys on applications of neural networks in medical imaging (see [1] for a relatively recent comprehensive survey). This article does not aim to cover all aspects of the field but focuses on a particular topic, image-to-image translation. Although the topic may not sound familiar, many seemingly unrelated applications can be understood as instances of image-to-image translation, including (1) noise reduction, (2) super-resolution, (3) image synthesis, and (4) reconstruction. The same underlying principles and algorithms work across these various tasks. Our aim is to introduce some of the key ideas on this topic from a unified viewpoint. We introduce core ideas and jargon specific to image processing with DNNs. An intuitive grasp of the core ideas behind applications of neural networks in medical imaging, together with a knowledge of the technical terms, will greatly help the reader understand existing and future applications. Most recent applications that build on image-to-image translation are based on one of two fundamental architectures, pix2pix and CycleGAN, depending on whether the available training data are paired or unpaired (see Sect. 1.3). We provide code [2, 3] implementing these two architectures with various enhancements; it is available online under the permissive MIT license. We also provide a hands-on tutorial for training a denoising model based on our code (see Sect. 6). We hope that this article, together with the code, will provide both an overview and the details of the key algorithms, and that it will serve as a basis for the development of new applications.
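To make the paired/unpaired distinction mentioned above concrete, the following is a minimal PyTorch sketch contrasting the two generator objectives: pix2pix, which requires paired data and penalizes the L1 distance to the known target, and CycleGAN, which works on unpaired data via cycle consistency. It is not the released code [2, 3]; the tiny networks, the least-squares adversarial loss, and the weights `lam` are illustrative assumptions only.

```python
# Minimal sketch (not the authors' implementation) of the pix2pix and
# CycleGAN generator losses. G: X -> Y and F: Y -> X are generators;
# D_x, D_y are (PatchGAN-style) discriminators on the two image domains.
import torch
import torch.nn as nn

def tiny_generator():
    # Placeholder generator; a real model would be a U-Net or ResNet.
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))

def tiny_discriminator():
    # Placeholder discriminator producing a per-patch score map.
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.LeakyReLU(0.2),
                         nn.Conv2d(8, 1, 3, padding=1))

adv_loss = nn.MSELoss()   # least-squares GAN loss (a common choice)
l1_loss = nn.L1Loss()

def pix2pix_generator_loss(G, D_y, x, y, lam=100.0):
    """Paired data: adversarial term plus L1 distance to the known target y."""
    fake_y = G(x)
    pred = D_y(fake_y)
    return adv_loss(pred, torch.ones_like(pred)) + lam * l1_loss(fake_y, y)

def cyclegan_generator_loss(G, F, D_y, D_x, x, y, lam=10.0):
    """Unpaired data: adversarial terms plus cycle consistency in both directions."""
    fake_y, fake_x = G(x), F(y)
    pred_y, pred_x = D_y(fake_y), D_x(fake_x)
    adv = (adv_loss(pred_y, torch.ones_like(pred_y)) +
           adv_loss(pred_x, torch.ones_like(pred_x)))
    cyc = l1_loss(F(fake_y), x) + l1_loss(G(fake_x), y)
    return adv + lam * cyc

if __name__ == "__main__":
    # Hypothetical example: low-dose (x) and normal-dose (y) CT patches.
    x = torch.randn(2, 1, 32, 32)
    y = torch.randn(2, 1, 32, 32)
    G, F = tiny_generator(), tiny_generator()
    D_x, D_y = tiny_discriminator(), tiny_discriminator()
    print(pix2pix_generator_loss(G, D_y, x, y).item())
    print(cyclegan_generator_loss(G, F, D_y, D_x, x, y).item())
```

The key design difference is visible in the loss terms: pix2pix can compare `G(x)` directly to its paired target `y`, whereas CycleGAN has no such pairing and instead requires that translating forward and back (`F(G(x))` and `G(F(y))`) recovers the input.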
               