The generalized lasso (GLasso) is an extension of lasso regression in which the $l_{1}$ penalty (regularization) is applied to a linear transformation of the coefficient vector. Finding the optimal solution of the GLasso is not straightforward because the penalty term is not differentiable. This brief presents a novel one-layer neural network that solves the generalized lasso for a wide range of penalty transformation matrices. The proposed neural network is proven to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the GLasso. It is also shown that the proposed network can solve many optimization problems, including sparse and weighted sparse representation, (weighted) total variation denoising, the fused lasso signal approximator, and trend filtering. Experiments on these problems illustrate and confirm the excellent performance of the proposed neural network in comparison with competing techniques.
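For concreteness, the GLasso estimate minimizes $\tfrac{1}{2}\|y - X\beta\|_{2}^{2} + \lambda\|D\beta\|_{1}$, where the choice of the penalty matrix $D$ recovers the special cases listed above: $D = I$ gives the ordinary lasso, a first-difference matrix gives the fused lasso (total variation denoising when $X = I$), and a second-difference matrix gives linear trend filtering. The sketch below illustrates this with a generic convex solver (CVXPY); it is not the neural-network solver proposed in the brief, and the dimensions, penalty weight, and data are arbitrary placeholders.

```python
# Minimal sketch of the generalized lasso objective, solved with the generic
# convex solver CVXPY rather than the neural network proposed in the brief.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))   # design matrix (placeholder data)
y = rng.standard_normal(n)        # observations (placeholder data)
lam = 1.0                         # penalty weight (arbitrary)

# Penalty transformation matrix D; swap it to recover the special cases:
#   np.eye(p)                     -> ordinary lasso (sparse representation)
#   np.diff(np.eye(p), axis=0)    -> fused lasso / total variation (1st differences)
#   np.diff(np.eye(p), 2, axis=0) -> linear trend filtering (2nd differences)
D = np.diff(np.eye(p), axis=0)

beta = cp.Variable(p)
objective = cp.Minimize(0.5 * cp.sum_squares(y - X @ beta)
                        + lam * cp.norm1(D @ beta))
cp.Problem(objective).solve()
print(beta.value[:5])
```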