Significance: We propose a strategy for building prior distributions that stabilize the estimation of complex "working models" when sample sizes are too small for standard statistical analysis. The stabilization is achieved by supplementing the observed data with a small amount of synthetic data generated from the predictive distribution of a simpler model. This class of prior distributions is easy to use and allows direct statistical interpretation.

Abstract: A catalytic prior distribution is designed to stabilize a high-dimensional "working model" by shrinking it toward a "simplified model." The shrinkage is achieved by supplementing the observed data with a small amount of "synthetic data" generated from a predictive distribution under the simpler model. We apply this framework to generalized linear models, where we propose various strategies for the specification of a tuning parameter governing the degree of shrinkage, and we study the resulting theoretical properties. In simulations, posterior estimation under such a catalytic prior outperforms maximum likelihood estimation from the working model and is generally comparable with or superior to existing competitive methods, both in frequentist prediction accuracy of point estimation and in coverage accuracy of interval estimation. Catalytic priors have simple interpretations and are easy to formulate.
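To illustrate the mechanism the abstract describes, the sketch below applies a catalytic-style prior to logistic regression: a simplified (intercept-only) model supplies a predictive distribution, synthetic data are drawn from it, and the posterior mode is computed as a weighted maximum likelihood estimate on the augmented data. This is a minimal illustration, not the authors' implementation; the sample sizes `n`, `p`, the synthetic-data size `M`, the total synthetic weight `tau`, the smoothing in `p_bar`, and the coordinate-wise resampling of covariates are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy observed data: small n relative to p (illustrative sizes) ---
n, p = 20, 5
tau, M = 4.0, 400            # tau = total weight of synthetic data (tuning parameter)
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([0.5, 1.0, -1.0, 0.5, 0.0, 0.0])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# --- simplified model: intercept-only logistic fit ---
# Its predictive distribution for a binary outcome is Bernoulli(p_bar);
# the +1/+2 smoothing is an illustrative choice.
p_bar = (y.sum() + 1) / (n + 2)

# --- synthetic data drawn from the simplified model's predictive distribution ---
# Covariates are resampled coordinate-wise from the observed data (an assumption).
X_syn = np.column_stack([np.ones(M)] +
                        [rng.choice(X[:, j], size=M) for j in range(1, p + 1)])
y_syn = rng.binomial(1, p_bar, size=M)

# --- posterior mode under the catalytic prior = weighted MLE on augmented data ---
X_all = np.vstack([X, X_syn])
y_all = np.concatenate([y, y_syn])
w = np.concatenate([np.ones(n), np.full(M, tau / M)])  # synthetic weights sum to tau

beta = np.zeros(p + 1)
for _ in range(100):  # Newton (IRLS) iterations for the weighted log-likelihood
    mu = 1 / (1 + np.exp(-X_all @ beta))
    grad = X_all.T @ (w * (y_all - mu))
    hess = (X_all * (w * mu * (1 - mu))[:, None]).T @ X_all
    beta += np.linalg.solve(hess, grad)

print(np.round(beta, 2))  # stabilized estimate, shrunk toward the simple model
```

Because the synthetic observations carry positive weight at every covariate configuration, the weighted Hessian stays invertible even when the raw data are separable, which is exactly the stabilization the prior is meant to provide.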