Model breaking measure for cosmological surveys


    Adam Amara* and Alexandre Refregier

Department of Physics, Institute for Astronomy, ETH Zurich, Wolfgang-Pauli-Strasse 27, CH-8093 Zurich, Switzerland

    (Received 22 September 2013; published 1 April 2014)

Recent observations have led to the establishment of the concordance ΛCDM model for cosmology. A number of experiments are being planned to shed light on dark energy, dark matter, inflation and gravity, which are the key components of the model. To optimize and compare the reach of these surveys, several figures of merit have been proposed. They are based on either the forecasted precision on the ΛCDM model and its expansion or on the expected ability to distinguish two models. We propose here another figure of merit that quantifies the capacity of future surveys to rule out the ΛCDM model. It is based on a measure of the difference in volume of observable space that the future surveys will constrain with and without imposing the model. This model breaking figure of merit is easy to compute and can lead to different survey optimizations than other metrics. We illustrate its impact using a simple combination of supernovae and baryon acoustic oscillation mock observations and compare the respective merit of these probes to challenge ΛCDM. We discuss how this approach would impact the design of future cosmological experiments.

    DOI: 10.1103/PhysRevD.89.083501 PACS numbers: 98.80.Es

    I. INTRODUCTION

Recent progress in cosmological observations has led to the establishment of the ΛCDM model as the standard model for cosmology. This simple model is able to fit a wide array of observations with about six parameters [1-3]. In spite of its success, several key ingredients of the model are not fully understood and have been introduced to fit the data rather than being derived from fundamental theory. These include dark matter [4], which (if attributed to particles) exists outside the standard model of particle physics, and dark energy [5,6]. The other ingredients of the model are associated with inflation, which conditions the initial state of the Universe, and Einstein gravity, which has not been tested on cosmological scales.

Alternatives to the ΛCDM model are numerous and growing. However, since the data are currently consistent with the ΛCDM model, progress in the field will likely be driven by the acquisition of new data that can be used to further challenge the model. In so doing, we hope to find evidence that will lead to a deeper understanding of physical processes and point us towards more fundamental alternative models. Significant amounts of current efforts in cosmology are thus focused on the design of future experiments that can optimally increase our cosmological knowledge. However, since there exists a wide array of equally compelling alternative models, finding a suitable metric with which to compare and optimize future experiments is challenging.

At present, the dominant metric for gauging the quality of planned experiments is the dark energy task force (DETF) figure of merit (FoM) [7]. This metric consists of expanding the simplest ΛCDM model so that the dark energy component is modeled as having an equation of state w, which is given by the ratio of pressure to density of dark energy. This equation of state is assumed to evolve linearly with scale factor a, w(a) = w0 + (1 - a)wa [8,9]. The DETF FoM can then be derived from the determinant of the covariance matrix of the two dark energy parameters w0 and wa, which can be calculated using Fisher matrix methods [10]. Since the linear expansion of the equation of state is only one of many possible extensions beyond ΛCDM, relying solely on this optimization may lead to biases in experiment design.

An alternative approach, which was proposed by the follow-up committee known as the DETF FoM Working Group [11], is to consider a more general expression for the equation of state. This approach relies on principal component analysis (PCA) methods to find the fundamental modes that a given experiment can measure. In their report, the DETF FoM Working Group suggests a prescription where the equation of state is divided into 36 redshift bins out to z = 10. One difficulty, however, is that Fisher matrix calculations can be unstable. The final results, therefore, can depend on the user's choice of initial basis set, which once again may not be well motivated and can lead to unintended selection biases [12]. The DETF FoM Working Group also advocates the use of alternative theoretical expansions that can be used to model possible deviations of gravity from Einstein's theory [13,14].

Numerous alternative metrics have been proposed in the literature. As well as further PCA-based techniques

*adam.amara@phys.ethz.ch
†alexandre.refregier@phys.ethz.ch

    PHYSICAL REVIEW D 89, 083501 (2014)

1550-7998/2014/89(8)/083501(7) 083501-1 © 2014 American Physical Society


[15-18], other methods have been proposed that include in the determinant calculations parameters of ΛCDM beyond those of the equation of state. These include the integrated parameter space optimization (IPSO) [19-21], and model selection methods based on forecasting the Bayes factor [22-26]. The latter approach relies on comparing two models and calculating the Bayes factor [B01 = p(d|M0)/p(d|M1)], which quantifies the odds of which model (M0 or M1) is preferred by the data (d). This method still requires a choice of an alternative model to which the null model can be compared.

The end result can vary, depending on the FoM used.

This is ultimately due to the fact that the FoMs are being used to ask subtly different questions. In an era where the total amount of data is growing, it is conceivable and fully expected that different FoMs will lead to similar optimizations. However, as experiments begin to fill the entire available cosmic volume, the trade-offs are likely to become more subtle. Hence, care should be given to focus precisely on the questions that we want to address.

In this paper, we explore the motivational question:

Which experiment is most likely to find data that will falsify ΛCDM? Given the success of ΛCDM so far, the detection of any deviation from this model would be a major discovery. These deviations may not necessarily emerge as a deviation from w = -1. As a result, to answer the motivational question above we formulate a new figure of merit, building on earlier work [27], which can be readily calculated using Gaussian approximations. In its purest form this figure of merit can be calculated using only (i) current data, (ii) the predictions from the simple ΛCDM model that we wish to challenge and (iii) the expected covariance matrix of the data for a future experiment. As part of our work, we also show how robust theoretical priors, such as light propagation on a metric, can be included in the calculation, if so desired. While the DETF FoM and the Bayes ratio approach are, respectively, related to model fitting and model selection, our approach is related to the problem of model testing.

This paper is organized as follows. In Sec. II, we derive

our new FoM and show the Gaussian approximation version of the calculation. In Sec. III, we investigate a simple cosmological toy-model example to illustrate our method. In this section we also compare our calculations to an FoM derived from the determinant of the Fisher matrix of the standard ΛCDM parameters. Finally, in Sec. IV we offer a discussion to summarize our findings.
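As a point of reference for the Fisher-determinant comparison mentioned above, a determinant-based FoM can be sketched in a few lines. This is a generic illustration, not the paper's computation; the Fisher matrix entries below are illustrative placeholders, not a forecast for any real survey:

```python
import numpy as np

# Illustrative 2x2 Fisher matrix for the dark energy parameters (w0, wa);
# the numbers are placeholders, not a forecast for any real survey.
fisher = np.array([[40.0, 12.0],
                   [12.0,  5.0]])

# The forecast parameter covariance is the inverse of the Fisher matrix.
cov = np.linalg.inv(fisher)

# One common DETF-style convention: FoM = 1/sqrt(det Cov) = sqrt(det F),
# i.e. the inverse area of the (w0, wa) error ellipse up to a constant.
fom_detf = 1.0 / np.sqrt(np.linalg.det(cov))
```

A tighter error ellipse (larger determinant of the Fisher matrix) yields a larger FoM, which is what makes this a convenient single-number optimization target.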

    II. FORMALISM

The basic principle of our approach is to make comparisons between the likely outcomes of future experiments in data space. In its purest form, this is a comparison between p(Df|Dc) and p(Df|Dc, θ), where the former is the probability of future data Df, given only current data Dc, and the latter is the probability of future data given current data and the constraint that the standard model (with parameters θ) being studied (in our case standard ΛCDM) must hold. In this empirical case, we can calculate the probability of future data by integrating over all possible values of the data (see derivation in the Appendix) such that

p(Df|Dc) = ∫ p(Df|T) p(T|Dc) dT,  (1)

where we have introduced the concept of "true value" T that corresponds to the value we obtain as the errors tend to zero. For the case where we assume a standard model holds, we can calculate the probability of future data by integrating over all possible values of the model parameters, θ,

p(Df|Dc, θ) = ∫ p(Df|θ) p(θ|Dc) dθ.  (2)
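Both integrals have the same structure: smear the future-data likelihood over today's uncertainty, with the true value T playing the role of θ in Eq. (1). A minimal Monte Carlo sketch of Eq. (2), assuming a hypothetical one-dimensional toy model in which the observable is the parameter θ itself measured with Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy posterior p(theta|Dc) from current data: Gaussian with assumed numbers.
theta_post = rng.normal(loc=1.0, scale=0.2, size=200_000)

# Future-data likelihood p(Df|theta): Gaussian noise of width sigma_f around
# the model prediction (here the observable is simply theta itself).
sigma_f = 0.1
df_samples = rng.normal(loc=theta_post, scale=sigma_f)

# df_samples are draws from the marginal p(Df|Dc, theta-model) of Eq. (2);
# its spread combines parameter uncertainty with future measurement noise,
# std ~ sqrt(0.2**2 + 0.1**2).
```

Drawing θ from the posterior and then Df from the likelihood is exactly the nested integral of Eq. (2) evaluated by sampling.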

In both cases, we can calculate the probabilities of the underlying variables given today's data. For instance, in the case of the model parameters,

p(θ|Dc) = p(Dc|θ) p(θ) / p(Dc).  (3)
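For a low-dimensional parameter space, Eq. (3) can be evaluated directly on a grid. The sketch below uses a hypothetical one-parameter Gaussian likelihood and a flat prior, purely to illustrate the normalization by the evidence p(Dc):

```python
import numpy as np

theta = np.linspace(-1.0, 3.0, 2001)
dtheta = theta[1] - theta[0]

prior = np.ones_like(theta)                        # flat prior p(theta), unnormalized
like = np.exp(-0.5 * ((theta - 1.2) / 0.3) ** 2)   # toy likelihood p(Dc|theta)

# Bayes' theorem: p(Dc) is just the normalizing integral of likelihood x prior.
evidence = np.sum(like * prior) * dtheta
post = like * prior / evidence

# The posterior integrates to one and its mean recovers the toy "measurement".
theta_mean = np.sum(theta * post) * dtheta
```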

Given two density distributions [for instance Eqs. (1) and (2)] we will need to be able to quantitatively compare them. For this, the concept of information entropy, which quantifies the level of uncertainty, is useful. A robust measure for this purpose is the relative entropy, also known as the Kullback-Leibler (KL) divergence [28], between the two distributions. In this case, this can be calculated as

KL(p, q) = ∫ ln[p(x)/q(x)] p(x) dx,  (4)
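Equation (4) can be checked numerically. The sketch below evaluates the KL divergence between two one-dimensional Gaussians on a grid and compares it against the known closed form for Gaussians (that closed form is standard background, not a result from the text):

```python
import numpy as np

def gauss(x, mu, sigma):
    """1D Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Two toy distributions: p = N(0, 1) and q = N(0.5, 1.5^2).
x = np.linspace(-12.0, 12.0, 240_001)
dx = x[1] - x[0]
p = gauss(x, 0.0, 1.0)
q = gauss(x, 0.5, 1.5)

# Eq. (4): KL(p, q) = integral of ln(p/q) p dx, in nats.
kl_numeric = np.sum(np.log(p / q) * p) * dx

# Closed form for two 1D Gaussians:
# KL = ln(s_q/s_p) + (s_p^2 + (mu_p - mu_q)^2) / (2 s_q^2) - 1/2
kl_exact = np.log(1.5) + (1.0 + 0.25) / (2.0 * 1.5 ** 2) - 0.5
```

Note that KL is asymmetric in its arguments, which is why the ordering of the two distributions in Eq. (5) below matters.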

where p(x) and q(x) are the two probability distributions to be compared. This measure quantifies the difference of information in the two cases and provides a measure of the difference between the two distributions.

Using this measure, our proposed figure of merit for model breaking is simply

KL(p(Df|Dc, θ), p(Df|Dc)).  (5)
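When both predictive distributions are well approximated by multivariate Gaussians, Eq. (5) reduces to the standard closed-form KL divergence between two Gaussians. The sketch below applies that formula to two hypothetical predictive distributions for a pair of observables, one with the model imposed (tight) and one data-driven (broad); all numbers are illustrative assumptions:

```python
import numpy as np

def kl_gauss(mu0, cov0, mu1, cov1):
    """KL(N0 || N1) between multivariate Gaussians, in nats (standard formula)."""
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    dmu = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + dmu @ inv1 @ dmu - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Hypothetical predictive distributions for two future observables Df = (d1, d2):
mu = np.array([1.0, 0.5])
cov_model = np.diag([0.01, 0.02])   # p(Df|Dc, theta): model imposed, tight
cov_free = np.diag([0.04, 0.05])    # p(Df|Dc): data-driven only, broad

# Eq. (5) in this Gaussian sketch: model-constrained distribution first.
fom_mb = kl_gauss(mu, cov_model, mu, cov_free)
```

A larger value means the model-constrained predictions occupy a much smaller volume of observable space than the data alone allow, i.e. the survey has more room to break the model.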

    A. The Gaussian case

The analysis outlined above is general and can be used to study probability distribution functions of arbitrary shape. However, due to their simplicity, probability distribution functions (PDFs) that are multivariate Gaussians are very attractive cases to study. In this case, the probabilities