This short paper describes a simple subgradient-based technique for deriving bounds on the optimal solution value when using the alternating direction method of multipliers (ADMM) to solve convex optimization problems. The technique requires a bound on the magnitude of some optimal solution vector, but is otherwise completely general. Computational examples on LASSO problems demonstrate that the technique can produce steadily converging bounds in situations in which standard Lagrangian bounds yield little or no useful information. A second set of experiments establishes a proof of concept indicating the potential practical usefulness of the bounding technique.
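The abstract does not give the derivation, but the ingredients it names, a subgradient of the objective at the current iterate together with a bound on the magnitude of some optimal solution, are enough to sketch how such a lower bound can be computed. The snippet below is a minimal illustration for a LASSO objective, assuming a Euclidean-norm bound R on an optimal solution x*; it is not necessarily the paper's exact construction. For any subgradient g of the convex objective f at a point x, convexity gives f(x*) >= f(x) + g.(x* - x) >= f(x) - g.x - R*||g||_2, so a valid lower bound on the optimal value is available at every ADMM iterate.

```python
import numpy as np


def lasso_value_and_subgradient(A, b, lam, x):
    """Objective f(x) = 0.5*||Ax - b||^2 + lam*||x||_1 and one subgradient at x."""
    r = A @ x - b
    f = 0.5 * r @ r + lam * np.abs(x).sum()
    g = A.T @ r + lam * np.sign(x)  # sign(0) = 0 is a valid subgradient choice at zero
    return f, g


def subgradient_lower_bound(A, b, lam, x, R):
    """Lower bound on the optimal LASSO value, assuming ||x*||_2 <= R.

    Convexity gives f(x*) >= f(x) + g.(x* - x) >= f(x) - g.x - R*||g||_2
    for any subgradient g of f at x, so the bound is valid at any iterate x.
    """
    f, g = lasso_value_and_subgradient(A, b, lam, x)
    return f - g @ x - R * np.linalg.norm(g)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    b = rng.standard_normal(40)
    lam, R = 0.1, 10.0          # R is an assumed bound on ||x*||_2
    x = np.zeros(100)           # stand-in for a current ADMM iterate
    f, _ = lasso_value_and_subgradient(A, b, lam, x)
    print("upper bound f(x):", f)
    print("lower bound     :", subgradient_lower_bound(A, b, lam, x, R))
```

In practice one would evaluate this bound at each ADMM iterate and keep the best (largest) value seen so far, giving a monotone lower-bound sequence to pair with the objective values as upper bounds.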
               