Computation of normalizing constants is a fundamental mathematical problem in various disciplines, particularly in Bayesian model selection. A sampling-based technique known as bridge sampling (Meng and Wong in Stat Sin 6(4):831–860, 1996) has been found to produce accurate estimates of normalizing constants and has been shown to possess good asymptotic properties. For small to moderate sample sizes (as in situations with limited computational resources), we demonstrate that the (optimal) bridge sampler produces biased estimates. Specifically, when one density (denoted $p_2$) is constructed to be close to the target density (denoted $p_1$) using the method of moments, our simulation-based results indicate that the correlation-induced bias introduced by the moment-matching procedure is non-negligible. More crucially, the bias is amplified as the dimensionality of the problem increases. We therefore carry out a series of theoretical and empirical investigations to identify the nature and origin of the bias. We then examine the effect of sample size allocation on the accuracy of bridge sampling estimates and find that one way to reduce both the bias and the standard error, at a small increase in computational effort, is to draw extra samples from the moment-matched density $p_2$ (which we assume is easy to sample from), provided that the evaluation of $p_1$ is not too expensive. We proceed to show how a simple adaptive approach we term “splitting” alleviates the correlation-induced bias at the expense of a higher standard error, irrespective of the dimensionality involved. We also slightly modify the strategy suggested by Wang et al. (Warp bridge sampling: the next generation, preprint, arXiv:1609.07690, 2019) to address the increase in standard error due to splitting, and later generalize it to further improve efficiency. Based on the preceding investigations, we conclude by offering insights into how a combination of these adaptive methods can improve the accuracy of bridge sampling estimates in Bayesian applications (where posterior samples are typically expensive to generate), and we illustrate this with a practical example.
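For concreteness, the sketch below (not taken from the paper) shows the iterative optimal bridge sampling estimator of Meng and Wong (1996), with $p_2$ taken to be a Gaussian whose mean and covariance are moment-matched to the draws from the target $p_1$, as the abstract describes. The function name `optimal_bridge_estimate` and all variable names are illustrative assumptions, not identifiers from the authors' code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def optimal_bridge_estimate(log_q1, samples_p1, n2, rng=None,
                            tol=1e-10, max_iter=1000):
    """Iterative optimal bridge sampling estimate of the normalizing constant
    c1 of an unnormalized target q1, using a moment-matched Gaussian p2.

    log_q1     : callable, log q1(theta) for a length-d array theta
    samples_p1 : (n1, d) array of draws from the target p1 = q1 / c1
    n2         : number of fresh draws taken from p2
    """
    rng = np.random.default_rng() if rng is None else rng
    n1, _ = samples_p1.shape
    s1, s2 = n1 / (n1 + n2), n2 / (n1 + n2)

    # Moment-match p2 to the p1 draws (this coupling is the source of the
    # correlation-induced bias the abstract discusses).
    p2 = multivariate_normal(mean=samples_p1.mean(axis=0),
                             cov=np.cov(samples_p1, rowvar=False))
    samples_p2 = p2.rvs(size=n2, random_state=rng)

    # Log-ratios log[q1/p2] on both sample sets (work in logs for stability).
    lr1 = np.array([log_q1(x) for x in samples_p1]) - p2.logpdf(samples_p1)
    lr2 = np.array([log_q1(x) for x in samples_p2]) - p2.logpdf(samples_p2)
    r1, r2 = np.exp(lr1), np.exp(lr2)

    c = 1.0  # initial guess; the fixed point of the iteration is the estimate
    for _ in range(max_iter):
        num = np.mean(r2 / (s1 * r2 + s2 * c))   # average over the p2 draws
        den = np.mean(1.0 / (s1 * r1 + s2 * c))  # average over the p1 draws
        c_new = num / den
        if abs(c_new - c) <= tol * abs(c):
            break
        c = c_new
    return c_new

# Toy check: unnormalized 2-D standard normal, true c1 = (2*pi)^(d/2) ~ 6.28.
rng = np.random.default_rng(0)
draws = rng.standard_normal((500, 2))     # stand-in for posterior draws from p1
log_q1 = lambda x: -0.5 * np.dot(x, x)    # unnormalized log-density q1
print(optimal_bridge_estimate(log_q1, draws, n2=500, rng=rng))
```

In this formulation the $p_1$ draws enter only through precomputed ratios, so increasing `n2` (drawing extra samples from $p_2$) is cheap whenever evaluating $q_1$ is cheap, which is the sample-allocation trade-off the abstract refers to.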