
With Daniele Bertolini, Mikhail Solon, Jonathan Walsh, and Kathryn Zurek

This paper (with more general details and applications to inflation in this companion paper) was about computing the covariance of the power spectrum deeper into the nonlinear regime than was previously possible. Why should anyone care about that? Well, let me tell you.


We want to extract as much information as possible from weak lensing surveys and galaxy surveys because this will better constrain cosmological parameters that tell us about exotic physics. For example, one can use information from these surveys to measure the neutrino mass or figure out what dark energy is made of.

In order to extract as much information as possible from these surveys to learn about new physics, we need to understand the power spectrum, which tells you how much structure there is on a given scale. We also need to know the covariance of the power spectrum, which tells you how the power on different length scales is correlated. That covariance is a fundamental source of uncertainty in galaxy surveys and the like, and it also tells you how much additional information new density modes bring to the table.
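
To make those two objects a bit more concrete, here is roughly what they look like (schematically; conventions for factors of 2π and for how the modes are binned vary from paper to paper):

```latex
% Power spectrum: the two-point statistics of the density contrast \delta in Fourier space
\langle \delta(\mathbf{k})\,\delta(\mathbf{k}') \rangle
    = (2\pi)^3\, \delta_D(\mathbf{k}+\mathbf{k}')\, P(k)

% Covariance of the estimated power spectrum between two wavenumber bins k_i and k_j
C(k_i,k_j) = \langle \hat{P}(k_i)\,\hat{P}(k_j) \rangle
    - \langle \hat{P}(k_i) \rangle \langle \hat{P}(k_j) \rangle
```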


Previously, the covariance of the power spectrum was very hard to compute. We sort of have a handle on how to calculate the covariance on extremely large scales, but on smaller, more nonlinear scales we don't fare as well. Some folks have tried to use numerical N-body simulations of structure formation (basically you put N particles that interact via gravity into a computer and say "go" -- kinda like playing god) to understand small scales better, but that's hard because a large number of simulations is required for statistical convergence, which gets very computationally laborious. Others have tried to build models or use approximations for what happens on small scales; this is unsatisfying because it's hard to quantify the errors of these methods, and it's not clear why they work or how robust they really are. Sometimes these models don't even respect symmetries that have to be present in the theory, like Galilean invariance. And sometimes calculations using these methods include effects from scales well beyond where the methods are known to break down, which is hard to trust. It's a huge bummer that these methods don't work out as well as one might hope, since a lot of the information in e.g. galaxy surveys lives on scales where these models aren't necessarily reliable.
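
For reference, the piece that people do have a handle on is the Gaussian part of the covariance, which is a textbook expression (up to binning details): different wavenumber bins are uncorrelated, and the scatter in each bin just depends on the power spectrum and the number of modes in that bin.

```latex
% Gaussian contribution to the power spectrum covariance:
% diagonal in the wavenumber bins, with N_i the number of modes in bin i
C_G(k_i,k_j) = \delta_{ij}\, \frac{2\, P(k_i)^2}{N_i}
```

Everything beyond this diagonal piece (the non-Gaussian part, sourced by the connected four-point function, a.k.a. the trispectrum) is what becomes important on smaller scales, and that's the part this paper is after.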


Enter your new best friend: the Effective Field Theory (EFT) of Large Scale Structure (LSS). The general EFT technique is very common in particle physics and condensed matter physics, and it's a way of systematically coarse-graining your understanding of the underlying physics. In this case, we know the underlying physics of what forms structures on these scales: gravity. However, it's analytically impossible to keep track of a whole universe's worth of gravitating particles. So the EFT of LSS captures nonlinear effects coming from messy gravitational interactions on small scales (which are still pretty big -- we're talking bigger than a galaxy cluster), and it does this by parametrizing the stuff in our universe as an imperfect fluid. For example, this procedure introduces an effective speed of sound, which one has to measure by fitting to data. So the perks of the EFT are that you avoid some of the problems that previous analytic techniques had, and you get a systematic, well-controlled way of improving the precision of your calculation (although it might cause you to groan to have to do a calculation to higher precision!). The downside of the EFT is that you have to measure the effective fluid parameters from data, for example from a simulation; one might start to worry about overfitting making the theory look better than it really is. (As the saying goes, with four parameters you can fit an elephant, and with a fifth you can make him wiggle his trunk.)
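
To give a flavor of what that effective speed of sound does in practice, here is the schematic form of the leading EFT correction to the power spectrum. (This is just the simplest, most familiar counterterm, shown to illustrate the idea; normalization conventions for c_s^2 differ between papers, and the covariance calculation itself involves more machinery than this.)

```latex
% One-loop EFT prediction for the matter power spectrum (schematic):
% standard perturbation theory plus a counterterm whose coefficient,
% an effective sound speed squared, has to be fit to data or simulations
P_{\text{EFT}}(k) \simeq P_{\text{lin}}(k) + P_{\text{1-loop}}(k)
    - 2\, c_s^2\, k^2\, P_{\text{lin}}(k)
```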


Anyway, we used the EFT of LSS to compute the covariance of the power spectrum. The result? We found that it improves over the usual approach (as expected) while fitting just one free parameter, since some of the parameters in the theory can be measured from e.g. the power spectrum itself and don't have to be re-measured. (That's the beauty of a predictive theory! Yay for the scientific method!) We are now able to use this method to understand the covariance down to "small" scales of around 100 million light years (roughly 3x bigger than some of the biggest galaxy clusters around). This is several times smaller in length than previous calculations done with other analytic techniques could reach. While a factor of just a few in length might not sound impressive, consider that the number of density modes in a survey (and hence the amount of information) scales like the cube of the smallest length scale you can trust, so shrinking that length by a factor of a few gains you that factor cubed in modes. This means that using our technique, one can access far more information about exotic physics than was previously possible.
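
Here's the back-of-the-envelope version of that mode-counting argument, with purely illustrative numbers (the factor of 3 just echoes the rough improvement quoted above, not a precise statement from the paper):

```python
# Rough mode counting: in a fixed survey volume, the number of Fourier modes
# available up to some maximum wavenumber scales like that wavenumber cubed,
# i.e. like one over the smallest trustworthy length scale cubed.

def mode_gain(length_old, length_new):
    """Factor by which the number of usable modes grows when the smallest
    reliable length scale shrinks from length_old to length_new."""
    return (length_old / length_new) ** 3

# Illustrative only: pushing the smallest reliable scale down by ~3x in length
print(mode_gain(length_old=3.0, length_new=1.0))  # ~27x more modes
```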


There are some small snags with this procedure, however. One of the hardest things about this project was measuring our EFT parameter, because the N-body data we fit to was pretty noisy on large scales. Also, large underlying systematics of these simulations might have affected our parameter fits. This points to a need for better EFT parameter extraction methods and for an improved understanding of the covariance in N-body simulations on the largest scales. We'll see how that goes... And it will take some more work before this EFT can be used for practical purposes, i.e. analyzing data from real surveys. But at least this is a step in the right direction and it gets the ball rolling for future work... Fingers crossed!
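
For the curious, extracting a single EFT parameter from simulation measurements is conceptually just a weighted least-squares fit, like the toy version below. (This is a made-up, schematic illustration with placeholder arrays standing in for real measurements; it is not our actual fitting pipeline.)

```python
import numpy as np

# Toy one-parameter fit: the prediction is (parameter-free theory piece) + alpha * (known template),
# and we want the alpha that best matches noisy "simulation" measurements.
# Every array below is a placeholder invented for illustration.

k = np.linspace(0.05, 0.3, 20)           # wavenumber bins (arbitrary units)
theory_base = 1.0 / (1.0 + k**2)         # stand-in for the parameter-free part of the prediction
template = -k**2 * theory_base           # stand-in for the shape multiplying the EFT parameter
sigma = 0.05 * theory_base               # stand-in for (noisy) error bars from simulations

rng = np.random.default_rng(0)
data = theory_base + 0.8 * template + rng.normal(0.0, sigma)  # fake measurement, true alpha = 0.8

# Weighted least squares for a single linear parameter has a closed form
w = 1.0 / sigma**2
alpha_hat = np.sum(w * template * (data - theory_base)) / np.sum(w * template**2)
print(f"best-fit alpha = {alpha_hat:.3f}")
```

The real-world complication, as described above, is that the measurements and their error bars are themselves noisy and carry simulation systematics, which is what makes the fit tricky in practice.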

The Non-Gaussian Covariance of the Matter Power Spectrum in the Effective Field Theory of Large Scale Structure