We propose a framework for distributed sparse optimization based on weakly convex regularizers. More specifically, we pose two distributed optimization problems to recover sparse signals in networks. The first problem formulation relies on statistical properties of the signals, and it uses an approximate Moreau enhanced penalty. In contrast, the second formulation does not rely on any statistical assumptions, and it uses an additional consensus promoting penalty (CPP) that convexifies the cost function over the whole network. To solve both problems, we propose a distributed proximal debiasing-gradient (DPD) method, which uses the exact first-order proximal gradient algorithm. The DPD method features a pair of proximity operators that play complementary roles: one sparsifies the estimate, and the other reduces the bias caused by the sparsification. Owing to the overall convexity of the cost functions, the proposed method is guaranteed to converge to a global minimizer, as confirmed by numerical examples. In addition, the use of the CPP significantly improves the convergence speed.
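The abstract does not specify the update rule or the exact proximity operators, so the sketch below is only a minimal illustration of the general idea, not the authors' DPD method. It assumes local least-squares data-fidelity terms, a doubly stochastic mixing matrix `W` for the consensus step, and firm thresholding (the proximity operator of a minimax concave, i.e. weakly convex, penalty) as a stand-in that combines the two roles described above: it sparsifies small coefficients while leaving large coefficients unshrunk, thereby avoiding the shrinkage bias of soft thresholding. All names and parameter values are hypothetical.

```python
import numpy as np


def firm_threshold(x, tau, mu):
    """Proximity operator of a minimax concave (weakly convex) penalty.

    Acts like soft thresholding on small entries (sparsification) but leaves
    entries with |x| > mu untouched, avoiding the l1 shrinkage bias.
    Requires mu > tau.  (Illustrative stand-in, not the paper's operator.)
    """
    out = np.where(np.abs(x) <= tau, 0.0, x)
    mid = (np.abs(x) > tau) & (np.abs(x) <= mu)
    return np.where(mid, np.sign(x) * mu * (np.abs(x) - tau) / (mu - tau), out)


def distributed_prox_gradient(A_list, b_list, W, tau=1.0, mu=0.5, n_iter=500):
    """Illustrative distributed proximal gradient loop for sparse recovery.

    A_list[k], b_list[k] : local linear measurements held by node k.
    W                    : doubly stochastic mixing (consensus) matrix.
    """
    K, n = len(A_list), A_list[0].shape[1]
    # Step size from the largest local Lipschitz constant of the gradients.
    L = max(np.linalg.norm(A, 2) ** 2 for A in A_list)
    step = 1.0 / L
    X = np.zeros((K, n))                       # one local estimate per node
    for _ in range(n_iter):
        X = W @ X                              # consensus (averaging) step
        for k in range(K):
            grad = A_list[k].T @ (A_list[k] @ X[k] - b_list[k])
            z = X[k] - step * grad             # local gradient step
            X[k] = firm_threshold(z, step * tau, mu)  # sparsify without bias
    return X


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, K = 50, 20, 4                        # dimension, samples per node, nodes
    x_true = np.zeros(n)
    x_true[rng.choice(n, 5, replace=False)] = rng.normal(0.0, 3.0, 5)
    A_list = [rng.normal(size=(m, n)) for _ in range(K)]
    b_list = [A @ x_true + 0.01 * rng.normal(size=m) for A in A_list]
    W = np.full((K, K), 1.0 / K)               # fully connected averaging
    X = distributed_prox_gradient(A_list, b_list, W)
    print("error of network-averaged estimate:",
          np.linalg.norm(X.mean(axis=0) - x_true))
```

The choice of firm thresholding here reflects one standard way a weakly convex penalty removes the bias introduced by ℓ1-type sparsification; the paper instead describes two separate proximity operators, whose exact forms are not given in the abstract.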
               