Region‐based estimation of the partition functions for hybrid Bayesian network models

The partition function $\boldsymbol{Ƶ}$ is the normalization constant that normalizes the distributions arising in probabilistic inference. $\boldsymbol{Ƶ}$ is closely related to the log probability of evidence, $\log p(e)$, for Bayesian networks (BNs), which plays an important role in many applications such as parameter learning, classification, and clustering. However, evaluating $\boldsymbol{Ƶ}$ and $\log p(e)$ for BNs containing both discrete and continuous variables (known as hybrid Bayesian networks, HBNs) is generally difficult for both analytical and simulation-based solutions. This paper addresses the problem of estimating the partition function for HBNs when exact methods become inefficient or numerically unstable. We use dynamic discretization to produce a discretized version of the model and then perform region-based free energy optimization to compute $\log p(e)$ for the HBN efficiently; the resulting method is called the DDJT-region partition function (RPR) algorithm. It is novel because region-based methods have previously only been applied to discrete BNs. We show that the $\log p(e)$ obtained by DDJT-RPR is exact for the discretized model and has space complexity equal to that of marginal inference, making it applicable to a wide range of applications. To demonstrate these properties, we combine the DDJT-RPR algorithm with an improved expectation-maximization algorithm to learn Gaussian mixture models (GMMs), both for applications requiring data only and for those with prior knowledge; it performs more robustly than conventional GMM learning algorithms.
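As a brief sketch of the quantities involved (not taken from the paper; the factor notation $f_a$, regions $R$, counting numbers $c_R$, and region beliefs $b_R$ are generic assumptions about the standard factor-graph/region-graph setup): once the evidence $e$ is clamped into the factors of the discretized model, the partition function coincides with the probability of evidence, and the region-based free energy provides a variational handle on its logarithm.

% Sketch only; Ƶ denotes the partition function as in the abstract.
\begin{align}
  Ƶ \;=\; \sum_{x} \prod_{a} f_a(x_a) \;=\; p(e),
  \qquad \log p(e) \;=\; \log Ƶ, \\
  F(\{b_R\}) \;=\; \sum_{R} c_R \sum_{x_R} b_R(x_R)
  \Bigl( \log b_R(x_R) \;-\; \sum_{a \in R} \log f_a(x_a) \Bigr).
\end{align}

Minimizing this region-based free energy over locally consistent region beliefs gives $-F \approx \log Ƶ$, with equality when the region graph is derived from a junction tree of the discretized model, which is consistent with the abstract's claim that the estimate is exact for the discretized HBN.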

Keywords: partition; log; region based

Journal Title: International Journal of Intelligent Systems
Year Published: 2022
