This letter shows that the sparse state recovery optimization method is equivalent to the well-known Huber M-estimator, and uses this equivalence to explain its robustness to bad data. We derive the total influence functions of the Huber M-estimator and the generalized maximum-likelihood (GM)-estimator, and give a formal proof that the Huber M-estimator is vulnerable to bad leverage points whereas the GM-estimator can handle them. Numerical simulations carried out on various IEEE test systems validate our theoretical results.
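As a minimal sketch of the claimed equivalence (the notation below is assumed for illustration and is not taken from the letter): with a linearized measurement model z = Hx + o + e, where the vector o collects sparse bad-data offsets, the sparse state recovery problem can be written as

\[
\min_{x,\,o}\; \tfrac{1}{2}\,\|z - Hx - o\|_2^2 + \lambda \,\|o\|_1 .
\]

Minimizing over o in closed form (soft-thresholding each residual) leaves a problem in x alone,

\[
\min_{x}\; \sum_i \rho_\lambda\!\left(z_i - h_i^\top x\right),
\qquad
\rho_\lambda(r) =
\begin{cases}
\tfrac{1}{2}\,r^2, & |r| \le \lambda,\\[2pt]
\lambda\,|r| - \tfrac{1}{2}\lambda^2, & |r| > \lambda,
\end{cases}
\]

which is exactly the Huber M-estimation objective with threshold \lambda. The vulnerability to bad leverage points can then be read off the standard influence function of a regression M-estimator, which under the usual regularity assumptions takes the form

\[
\mathrm{IF}(h_0, r_0) \;\propto\; \psi_\lambda(r_0)\, B^{-1} h_0 ,
\]

where \psi_\lambda = \rho_\lambda' is bounded in the residual r_0 but the leverage factor h_0 is unbounded, whereas a GM-estimator additionally downweights h_0.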
               