In the past decade, sparsity-driven regularization has driven advances in image reconstruction algorithms. Traditionally, such regularizers rely on analytical models of sparsity [e.g., total variation (TV)]. However, more recent methods are increasingly centered around data-driven priors inspired by deep learning. In this letter, we propose to generalize TV regularization by replacing the $\ell_1$-penalty with an alternative, trainable prior. Specifically, our method learns the prior by extending the recently proposed fast parallel proximal algorithm to incorporate data-adaptive proximal operators. The proposed framework requires no additional inner iterations for evaluating the proximal mappings of the learned prior. Moreover, our formalism ensures that the training and reconstruction processes share the same algorithmic structure, making the end-to-end implementation intuitive. As an example, we demonstrate our algorithm on the problem of image deconvolution in fluorescence microscopy.
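To make the idea concrete, here is a minimal sketch of the general pattern the abstract describes: a proximal-gradient deconvolution loop in which the soft-thresholding step of anisotropic TV is replaced by a small trainable shrinkage function. This is not the letter's fast parallel proximal algorithm; it is a generic ISTA-style loop, and all names (`LearnableShrinkage`, `blur`, `gamma`, `alpha`, and the gradient-domain correction step) are illustrative assumptions, not the authors' API.

```python
# Illustrative sketch only: a generic proximal-gradient deconvolution loop
# with a trainable shrinkage replacing the l1 soft-threshold of TV. It does
# NOT reproduce the letter's fast parallel proximal algorithm.
import torch
import torch.nn.functional as F


class LearnableShrinkage(torch.nn.Module):
    """Pointwise, odd, monotone nonlinearity standing in for the learned
    proximal map; reduces to soft-thresholding (the l1 prox) when slope == 1."""

    def __init__(self, threshold=0.1, slope=1.0):
        super().__init__()
        self.threshold = torch.nn.Parameter(torch.tensor(threshold))
        self.slope = torch.nn.Parameter(torch.tensor(slope))

    def forward(self, z):
        return self.slope * torch.sign(z) * F.relu(z.abs() - self.threshold.abs())


def blur(x, psf):
    """Circular convolution with the point-spread function, via the FFT.
    The psf is assumed normalized (entries summing to one)."""
    P = torch.fft.rfft2(psf, s=x.shape[-2:])
    return torch.fft.irfft2(P * torch.fft.rfft2(x), s=x.shape[-2:])


def blur_adjoint(r, psf):
    """Exact adjoint of `blur` under circular boundary conditions."""
    P = torch.fft.rfft2(psf, s=r.shape[-2:])
    return torch.fft.irfft2(torch.conj(P) * torch.fft.rfft2(r), s=r.shape[-2:])


def finite_diff(x):
    """Horizontal and vertical circular finite differences (the TV transform)."""
    dh = torch.roll(x, -1, dims=-1) - x
    dv = torch.roll(x, -1, dims=-2) - x
    return dh, dv


def finite_diff_adjoint(dh, dv):
    """Adjoint of `finite_diff` (a negative circular divergence)."""
    return (torch.roll(dh, 1, dims=-1) - dh) + (torch.roll(dv, 1, dims=-2) - dv)


def reconstruct(y, psf, prox, gamma=0.5, alpha=0.25, num_iters=50):
    """Alternate a gradient step on 0.5 * ||blur(x) - y||^2 with a heuristic
    correction that pulls the finite differences of x toward their shrunken
    values. The correction only imitates the proximal step that the letter
    evaluates without inner iterations."""
    x = y.clone()
    for _ in range(num_iters):
        x = x - gamma * blur_adjoint(blur(x, psf) - y, psf)
        dh, dv = finite_diff(x)
        x = x - alpha * finite_diff_adjoint(dh - prox(dh), dv - prox(dv))
    return x
```

Because every step above is differentiable, the loop can be unrolled for a fixed number of iterations and the shrinkage parameters trained by backpropagating an image-domain loss (e.g., MSE against ground-truth images), which mirrors the abstract's point that training and reconstruction share the same algorithmic structure.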