Particle filters (PFs) constitute a sequential data assimilation method based on the Monte Carlo approximation of Bayesian estimation theory. Standard PFs use scalar weights derived from the observation likelihood to approximate the posterior probability density function (PDF), and they use resampling schemes to generate posterior particles. However, the scalar-weight approach interferes with localization algorithms and often results in filter degeneracy. Recently, a localized particle filter (LPF) was developed by extending the scalar weights of the PF to vector weights, which yields different (local) posterior PDFs for different model grid points and variables. With a sampling-and-merging approach in the resampling step, an LPF can effectively mitigate filter degeneracy and offers a practical, efficient localization algorithm. However, this algorithm assumes that the weights of a state variable vary continuously across neighbouring grid points and uses spatially linear interpolation of the PF weights to determine the local weights. In this paper, we first analyse the possible concerns associated with the assumed linear continuity of the PF weights. This assumption is found to conflict with the nonlinear, non-Gaussian character of weight variations and to smooth out the intrinsic spatial variations of the PF weights. On this basis, we propose a new algorithm for producing vector weights for PFs over neighbouring grid points. Numerical experiments with the Lorenz '96 model show that the new localized particle filter outperforms the existing LPF algorithm, indicating the advantages and potential applications of this vector-weight algorithm in the field of data assimilation.
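The vector-weights idea can be made concrete with a small sketch. The Python snippet below is illustrative only and is not the authors' algorithm or the proposed new scheme: the function names, the Gaussian observation-error model, the periodic Lorenz-'96-style grid, and the choice of a Gaspari-Cohn taper are all assumptions. It computes one normalized weight vector per grid point by tapering each observation's log-likelihood contribution according to its distance from that grid point, so that nearby observations dominate the local weights.

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn fifth-order taper, a common localization function.
    Returns values in [0, 1]; identically zero beyond distance 2c."""
    r = np.abs(dist) / c
    taper = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    taper[m1] = (((-0.25 * r[m1] + 0.5) * r[m1] + 0.625) * r[m1]
                 - 5.0 / 3.0) * r[m1] ** 2 + 1.0
    taper[m2] = ((((r[m2] / 12.0 - 0.5) * r[m2] + 0.625) * r[m2]
                  + 5.0 / 3.0) * r[m2] - 5.0) * r[m2] + 4.0 - 2.0 / (3.0 * r[m2])
    return taper

def local_vector_weights(particles, obs, obs_idx, obs_var, loc_radius):
    """Localized (vector) PF weights: one normalized weight vector per
    grid point, with each observation's log-likelihood contribution
    tapered by its distance to that grid point.

    particles : (Np, Nx) ensemble of state vectors
    obs       : (Ny,) observations of the state components in obs_idx
    A periodic one-dimensional grid of size Nx is assumed.
    Returns an (Nx, Np) array whose rows each sum to one."""
    n_p, n_x = particles.shape
    # Per-observation, per-particle Gaussian log-likelihood terms
    innov = obs[None, :] - particles[:, obs_idx]        # (Np, Ny)
    loglik = -0.5 * innov ** 2 / obs_var                # (Np, Ny)
    # Periodic distance between each grid point and each obs location
    d = np.abs(np.arange(n_x)[:, None] - np.array(obs_idx)[None, :])
    d = np.minimum(d, n_x - d)                          # (Nx, Ny)
    taper = gaspari_cohn(d.astype(float), loc_radius)   # (Nx, Ny)
    # Local log-weights: taper each obs term, then sum over observations
    logw = taper @ loglik.T                             # (Nx, Np)
    logw -= logw.max(axis=1, keepdims=True)             # numerical stability
    w = np.exp(logw)
    return w / w.sum(axis=1, keepdims=True)
```

Each row of the returned array is a weight vector for one grid point; a per-grid-point resampling step (such as the sampling-and-merging approach mentioned above) would then act on these local weights. How the local weights are constructed for grid points between observations is precisely where the linearly interpolated weights criticised here and the proposed vector-weight algorithm differ.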