
Consensus Based Distributed Sparse Bayesian Learning by Fast Marginal Likelihood Maximization

For swarm systems, distributed processing is of paramount importance, and Bayesian methods are preferred for their robustness. Existing distributed sparse Bayesian learning (SBL) methods rely on automatic relevance determination (ARD), which involves a computationally complex reweighted ℓ1-norm optimization, or on loopy belief propagation, which is not guaranteed to converge. This paper therefore builds on the fast marginal likelihood maximization (FMLM) method to develop a faster distributed SBL variant. The proposed method has low communication overhead and can be distributed by simple consensus methods. Simulations indicate better performance than the distributed ARD version and the same performance as the FMLM.
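The abstract states that the method "can be distributed by simple consensus methods." A minimal sketch of such a consensus-averaging step is shown below: each node repeatedly replaces its local quantity with a weighted average of its neighbours' values, so every node converges to the network-wide mean without a fusion center. The ring topology, Metropolis weights, and scalar values here are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def metropolis_weights(adj):
    """Build a doubly stochastic weight matrix from an adjacency
    matrix using the Metropolis rule, W[i,j] = 1/(1 + max(d_i, d_j))."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # self-weight keeps each row summing to 1
    return W

def consensus_average(values, adj, iters=200):
    """Run iters rounds of x <- W x; every node's value approaches
    the global mean of the initial values."""
    W = metropolis_weights(adj)
    x = np.array(values, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x

# Hypothetical example: 5 nodes on a ring, each holding one local statistic.
n = 5
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
local = [1.0, 2.0, 3.0, 4.0, 5.0]
result = consensus_average(local, adj)  # all entries approach mean(local) = 3.0
```

In a distributed SBL setting, the averaged quantities would be the sufficient statistics each node needs for the marginal-likelihood updates rather than raw scalars; the averaging mechanism itself is the same.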

Keywords: fast marginal; distributed sparse; likelihood maximization; marginal likelihood; sparse Bayesian; Bayesian learning

Journal Title: IEEE Signal Processing Letters
Year Published: 2020
