The observable universe contains density perturbations on scales larger than any finite-volume survey. Perturbations on scales larger than those a survey can measure degrade its power to constrain cosmological parameters. The dependence of survey observables, such as the weak lensing power spectrum, on these long-wavelength modes results in super-sample covariance. Accurately forecasting parameter constraints for future surveys therefore requires accounting for these super-sample effects. If super-sample covariance is in fact a major component of the survey error budget, it may be necessary to investigate mitigation strategies that constrain the specific realization of the long-wavelength modes. We present a Fisher-matrix-based formalism for approximating the magnitude of super-sample covariance and the effectiveness of mitigation strategies for realistic survey geometries. We implement our formalism in the public code SuperSCRAM: Super-Sample Covariance Reduction and Mitigation. We illustrate SuperSCRAM with an example application in which the modes contributing to super-sample covariance in the WFIRST weak lensing survey are constrained by low-redshift galaxy number counts in the wider LSST footprint. We find that super-sample covariance increases the volume of the error ellipsoid in the 7D cosmological parameter space by a factor of 4.5 relative to Gaussian statistical errors alone, but our simple mitigation strategy more than halves the contamination, to a factor of 2.0.
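The quoted inflation factors (4.5 before mitigation, 2.0 after) refer to the volume of the parameter error ellipsoid, which in a Fisher analysis scales as the square root of the determinant of the parameter covariance matrix. The sketch below illustrates only that bookkeeping; it does not use the SuperSCRAM API, and the Gaussian-only Fisher matrix and super-sample covariance term are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch: how much a super-sample covariance (SSC) contribution inflates
# the volume of the error ellipsoid in a 7D parameter space, in the Fisher
# approximation. The inputs are hypothetical placeholders, not SuperSCRAM output.

n_param = 7
rng = np.random.default_rng(42)

# Hypothetical Gaussian-only Fisher matrix (symmetric positive definite).
A = rng.standard_normal((n_param, n_param))
F_gauss = A @ A.T + n_param * np.eye(n_param)
C_gauss = np.linalg.inv(F_gauss)  # Gaussian-only parameter covariance

# Hypothetical extra parameter covariance induced by super-sample modes.
B = 0.3 * rng.standard_normal((n_param, n_param))
C_ssc = B @ B.T

C_total = C_gauss + C_ssc  # parameter covariance including the SSC contribution

# The volume of an error ellipsoid scales as sqrt(det of its covariance),
# so the inflation factor is sqrt(det C_total / det C_gauss).
# slogdet keeps the determinant ratio numerically stable.
_, logdet_total = np.linalg.slogdet(C_total)
_, logdet_gauss = np.linalg.slogdet(C_gauss)
volume_factor = np.exp(0.5 * (logdet_total - logdet_gauss))

print(f"Error-ellipsoid volume inflation from SSC: {volume_factor:.2f}x")
```

In this picture, a mitigation strategy of the kind described above corresponds to shrinking the residual super-sample term (here, replacing C_ssc with a smaller matrix), which drives the inflation factor back toward unity.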
               