
NBER WORKING PAPER SERIES

PRIOR SELECTION FOR VECTOR AUTOREGRESSIONS

Domenico Giannone
Michele Lenza

Giorgio E. Primiceri

Working Paper 18467
http://www.nber.org/papers/w18467

NATIONAL BUREAU OF ECONOMIC RESEARCH
1050 Massachusetts Avenue

Cambridge, MA 02138
October 2012

We thank Liseo Brunero, Guenter Coenen, Gernot Doppelhofer, Raffaella Giacomini, Dimitris Korobilis, Frank Schorfheide, Chris Sims and participants in several conferences and seminars for comments and suggestions. Domenico Giannone is grateful to the Actions de Recherche Concertées (contract ARC-AUWB/2010-15/ULB-11) and Giorgio Primiceri to the Alfred P. Sloan Foundation for financial support. The views expressed in this paper are those of the authors and do not necessarily reflect those of the Eurosystem. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.

At least one co-author has disclosed a financial relationship of potential relevance for this research. Further information is available online at http://www.nber.org/papers/w18467.ack

NBER working papers are circulated for discussion and comment purposes. They have not been peer-reviewed or been subject to the review by the NBER Board of Directors that accompanies official NBER publications.

© 2012 by Domenico Giannone, Michele Lenza, and Giorgio E. Primiceri. All rights reserved. Short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.


Prior Selection for Vector Autoregressions
Domenico Giannone, Michele Lenza, and Giorgio E. Primiceri
NBER Working Paper No. 18467
October 2012
JEL No. C11, C32, C53, E37, E47

ABSTRACT

Vector autoregressions (VARs) are flexible time series models that can capture complex dynamic interrelationships among macroeconomic variables. However, their dense parameterization leads to unstable inference and inaccurate out-of-sample forecasts, particularly for models with many variables. A solution to this problem is to use informative priors, in order to shrink the richly parameterized unrestricted model towards a parsimonious naïve benchmark, and thus reduce estimation uncertainty. This paper studies the optimal choice of the informativeness of these priors, which we treat as additional parameters, in the spirit of hierarchical modeling. This approach is theoretically grounded, easy to implement, and greatly reduces the number and importance of subjective choices in the setting of the prior. Moreover, it performs very well both in terms of out-of-sample forecasting, where it is competitive with factor models, and in terms of accuracy in the estimation of impulse response functions.

Domenico Giannone
ECARES - Université Libre de Bruxelles
Avenue F. D. Roosevelt, 50
1050 Brussels
[email protected]

Michele Lenza
European Central Bank
Kaiserstrasse 29
60311 Frankfurt am Main
Germany
[email protected]

Giorgio E. Primiceri
Department of Economics
Northwestern University
318 Andersen Hall
2001 Sheridan Road
Evanston, IL 60208-2600
and NBER
[email protected]


Prior Selection for Vector Autoregressions∗

Domenico Giannone

Université Libre de Bruxelles and CEPR

Michele Lenza

European Central Bank

Giorgio E. Primiceri

Northwestern University, CEPR and NBER

First Version: March 2010
This Version: September 2012

Abstract

Vector autoregressions (VARs) are flexible time series models that can capture complex dynamic interrelationships among macroeconomic variables. However, their dense parameterization leads to unstable inference and inaccurate out-of-sample forecasts, particularly for models with many variables. A solution to this problem is to use informative priors, in order to shrink the richly parameterized unrestricted model towards a parsimonious naïve benchmark, and thus reduce estimation uncertainty. This paper studies the optimal choice of the informativeness of these priors, which we treat as additional parameters, in the spirit of hierarchical modeling. This approach is theoretically grounded, easy to implement, and greatly reduces the number and importance of subjective choices in the setting of the prior. Moreover, it performs very well both in terms of out-of-sample forecasting, where it is competitive with factor models, and in terms of accuracy in the estimation of impulse response functions.

1 Introduction

In this paper, we study the choice of the informativeness of the prior distribution on the coefficients of the following VAR model:

$$y_t = C + B_1 y_{t-1} + \dots + B_p y_{t-p} + \varepsilon_t \qquad (1.1)$$

$$\varepsilon_t \sim N(0, \Sigma),$$

where $y_t$ is an $n \times 1$ vector of endogenous variables, $\varepsilon_t$ is an $n \times 1$ vector of exogenous shocks, and $C$, $B_1, \dots, B_p$ and $\Sigma$ are matrices of suitable dimensions containing the model's unknown parameters.

∗ We thank Liseo Brunero, Guenter Coenen, Gernot Doppelhofer, Raffaella Giacomini, Dimitris Korobilis, Frank Schorfheide, Chris Sims and participants in several conferences and seminars for comments and suggestions. Domenico Giannone is grateful to the Actions de Recherche Concertées (contract ARC-AUWB/2010-15/ULB-11) and Giorgio Primiceri to the Alfred P. Sloan Foundation for financial support. The views expressed in this paper are those of the authors and do not necessarily reflect those of the Eurosystem.


With flat priors and conditioning on the initial $p$ observations, the posterior distribution of $\beta \equiv \mathrm{vec}([C, B_1, \dots, B_p]')$ is centered at the Ordinary Least Squares (OLS) estimate of the coefficients and is easy to compute. It is well known, however, that working with flat priors leads to inadmissible estimators (Stein, 1956) and yields poor inference, particularly in large dimensional systems (see, for example, Sims, 1980; Litterman, 1986). One typical symptom of this problem is the fact that these models generate inaccurate out-of-sample predictions, due to the large estimation uncertainty of the parameters.
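To fix ideas for what follows, the flat-prior posterior center is just equation-by-equation OLS. The sketch below (illustrative names, numpy only; a minimal sketch, not the authors' code) shows one way to build the data matrices and compute it.

```python
import numpy as np

def flat_prior_var(data, p):
    """OLS estimate of a VAR(p), i.e. the center of the flat-prior posterior.

    data: (T, n) array; the first p rows serve as initial conditions.
    Returns B = [C, B_1, ..., B_p]' of shape (1 + n*p, n), plus y and x.
    """
    T, n = data.shape
    y = data[p:]                                   # (T - p, n)
    cols = [np.ones((T - p, 1))]                   # constant term
    for s in range(1, p + 1):
        cols.append(data[p - s:T - s])             # y_{t-s} for t = p+1, ..., T
    x = np.hstack(cols)                            # (T - p, 1 + n*p)
    B = np.linalg.solve(x.T @ x, x.T @ y)          # (x'x)^{-1} x'y
    return B, y, x
```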

In order to improve the forecasting performance of VAR models, Litterman (1980) and Doan, Litterman, and Sims (1984) proposed to combine the likelihood function with some informative prior distributions. Using frequentist terminology, these priors are successful because they effectively reduce the estimation error, while generating only relatively small biases in the estimates of the parameters. For a more formal illustration of this point from a Bayesian perspective, consider the following (conditional) prior distribution for the VAR coefficients:

$$\beta|\Sigma \sim N(b, \Sigma \otimes \Omega\xi),$$

where the vector $b$ and the matrix $\Omega$ are known, and $\xi$ is a scalar parameter controlling the tightness of the prior information. The conditional posterior of $\beta$ can be obtained by multiplying this prior by the likelihood function. Taking the initial $p$ observations of the sample as given—a standard assumption that we maintain through the entire paper, without explicitly conditioning on these observations—the posterior takes the form

$$\beta | \Sigma, y \sim N\big(\hat\beta(\xi),\, V(\xi)\big)$$

$$\hat\beta(\xi) \equiv \mathrm{vec}\big(\hat B(\xi)\big)$$

$$\hat B(\xi) \equiv \big(x'x + (\Omega\xi)^{-1}\big)^{-1}\big(x'y + (\Omega\xi)^{-1} b\big)$$

$$V(\xi) \equiv \Sigma \otimes \big(x'x + (\Omega\xi)^{-1}\big)^{-1},$$

where $y \equiv [y_{p+1}, \dots, y_T]'$, $x \equiv [x_{p+1}, \dots, x_T]'$, $x_t \equiv [1, y'_{t-1}, \dots, y'_{t-p}]'$, and, with a slight abuse of notation, $b$ in the expression for $\hat B(\xi)$ denotes the matrix obtained by reshaping the vector $b$ in such a way that each column corresponds to the prior mean of the coefficients of each equation (i.e. $b \equiv \mathrm{vec}(b)$). Notice that, if we choose a lower $\xi$, the prior becomes more informative, the posterior mean of $\beta$ moves towards the prior mean, and the posterior variance falls.

In this context, one natural way to assess the impact of different priors on the model's ability to fit the data is to evaluate their effect on the model's out-of-sample forecasting performance, summarized by the probability of observing low forecast errors. To this end, rewrite (1.1) as

$$y_t = X_t \beta + \varepsilon_t,$$

where $X_t \equiv I_n \otimes x'_t$ and $I_n$ denotes an $n \times n$ identity matrix. At time $T$, the distribution of the one-step-ahead forecast is given by

$$y_{T+1} | \Sigma, y \sim N\big(X_{T+1}\hat\beta(\xi),\; X_{T+1} V(\xi) X'_{T+1} + \Sigma\big),$$
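These formulas translate directly into a few lines of linear algebra. The sketch below computes the posterior moments and the one-step-ahead predictive moments for a given tightness $\xi$ and a given draw of $\Sigma$; all names are illustrative, and this is a minimal sketch rather than the authors' implementation.

```python
import numpy as np

def conjugate_posterior_forecast(y, x, b_mat, Omega, xi, Sigma, x_next):
    """Posterior moments of beta given Sigma, and one-step-ahead predictive
    moments, under beta|Sigma ~ N(b, Sigma kron (Omega * xi)).

    y: (T-p, n) data, x: (T-p, k) regressors, b_mat: (k, n) prior mean in
    matrix form (its column-major vec is the vector b), Omega: (k, k),
    xi: scalar prior tightness, Sigma: (n, n), x_next: (k,) regressors at T+1.
    """
    n = y.shape[1]
    Oxi_inv = np.linalg.inv(Omega * xi)
    K = x.T @ x + Oxi_inv                       # x'x + (Omega*xi)^{-1}
    B_hat = np.linalg.solve(K, x.T @ y + Oxi_inv @ b_mat)
    V = np.kron(Sigma, np.linalg.inv(K))        # posterior variance of vec(B)
    # Predictive density: N(X_{T+1} beta_hat, X_{T+1} V X'_{T+1} + Sigma)
    X_next = np.kron(np.eye(n), x_next[None, :])
    f_mean = X_next @ B_hat.flatten(order="F")  # equals B_hat.T @ x_next
    f_var = X_next @ V @ X_next.T + Sigma
    return B_hat, V, f_mean, f_var
```

Lowering `xi` pulls `B_hat` toward `b_mat` and shrinks `V`, exactly the comparative statics described in the text.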


whose variance depends both on the posterior variance of the coefficients and on the volatility of the innovations. It is then easy to see that neither very high nor very low values of $\xi$ are likely to be ideal. On the one hand, if $\xi$ is too low and the prior very dogmatic, density forecasts will be very concentrated around $X_{T+1}b$. This results in a low probability of observing small forecast errors, unless the prior mean happens to be in a close neighborhood of the likelihood peak (and there is no reason to believe that this is the case, in general). On the other hand, if $\xi$ is too high and the prior too uninformative, the model generates very dispersed density forecasts, especially in high-dimensional VARs, because of high estimation uncertainty. This also lowers the probability of observing small forecast errors, despite the fact that the distance between $y_{T+1}$ and $X_{T+1}\hat\beta(\xi)$ might be small. In sum, neither flat nor dogmatic priors maximize the fit of the model, which makes the choice of the informativeness of the prior distribution a crucial issue.

The literature has proposed a number of heuristic methodologies to set the informativeness of the prior distribution on the VAR coefficients. For example, Litterman (1980) and Doan, Litterman, and Sims (1984) set the tightness of the prior by maximizing the out-of-sample forecasting performance of the model over a pre-sample. Banbura, Giannone, and Reichlin (2010) propose instead to control for over-fitting by choosing the shrinkage parameters that yield a desired in-sample fit.1

From a purely Bayesian perspective, however, the choice of the informativeness of the prior distribution is conceptually identical to inference on any other unknown parameter of the model. Suppose, for instance, that a model is described by a likelihood function $p(y|\theta)$ and a prior distribution $p_\gamma(\theta)$, where $\theta$ is the vector of the model's parameters and $\gamma$ collects the hyperparameters, i.e. those coefficients that parameterize the prior distribution, but do not directly affect the likelihood.2 It is then natural to choose these hyperparameters by interpreting the model as a hierarchical model, i.e. replacing $p_\gamma(\theta)$ with $p(\theta|\gamma)$, and evaluating their posterior (Berger, 1985; Koop, 2003). Such a posterior can be obtained by applying Bayes' law, which yields

$$p(\gamma|y) \propto p(y|\gamma) \cdot p(\gamma),$$

where $p(\gamma)$ denotes the prior density on the hyperparameters—also known as the hyperprior—while $p(y|\gamma)$ is the so-called marginal likelihood (ML), and corresponds to

$$p(y|\gamma) = \int p(y|\theta, \gamma)\, p(\theta|\gamma)\, d\theta. \qquad (1.2)$$

In other words, the ML is the density of the data as a function of the hyperparameters $\gamma$, obtained after integrating out the uncertainty about the model's parameters $\theta$. Conveniently, in the case of VARs with conjugate priors, the ML is available in closed form.

1 A number of papers have subsequently followed either the first (e.g. Robertson and Tallman, 1999; Wright, 2009; Giannone, Lenza, Momferatou, and Onorante, 2010) or the second strategy (e.g. Giannone, Lenza, and Reichlin, 2008; Bloor and Matheson, 2009; Carriero, Kapetanios, and Marcellino, 2009; Koop, 2011).

2 The distinction between parameters and hyperparameters is mostly fictitious and made only for convenience.


Conducting formal inference on the hyperparameters is theoretically grounded and also has several appealing interpretations. For example, with a flat hyperprior, the shape of the posterior of the hyperparameters coincides with the ML, which is a measure of the out-of-sample forecasting performance of a model (see Geweke, 2001; Geweke and Whiteman, 2006). More specifically, the ML corresponds to the probability density that the model generates zero forecast errors, which can be seen by rewriting the ML as a product of conditional densities:

$$p(y|\gamma) = \prod_{t=p+1}^{T} p\big(y_t | y^{t-1}, \gamma\big).$$

As a consequence, maximizing the posterior of the hyperparameters corresponds to maximizing the one-step-ahead out-of-sample forecasting ability of the model.

Moreover, the strategy of estimating the hyperparameters by maximizing the ML (i.e. their posterior under a flat hyperprior) is an Empirical Bayes method (Robbins, 1956), which has a clear frequentist interpretation. On the other hand, the full posterior evaluation of the hyperparameters (as advocated, for example, by Lopes, Moreira, and Schmidt, 1999, for VARs) can be thought of as conducting Bayesian inference on the population parameters of a random effects model or, more generally, of a hierarchical model (see, for instance, Gelman, Carlin, Stern, and Rubin, 2004).

Finally, the hierarchical structure also implies that the unconditional prior for the parameters $\theta$ has a mixed distribution

$$p(\theta) = \int p(\theta|\gamma)\, p(\gamma)\, d\gamma.$$

Mixed distributions generally have fatter tails than each of the component distributions $p(\theta|\gamma)$, a property that robustifies inference. In fact, when the prior has fatter tails than the likelihood, the posterior is less sensitive to extreme discrepancies between prior and likelihood (Berger, 1985; Berger and Berliner, 1986).

1.1 Contribution

In this paper, we adopt the hierarchical modeling approach to conduct inference about the informativeness of the prior distribution of Bayesian Vector Autoregressions (BVARs) estimated on postwar U.S. macroeconomic data. We consider a combination of the conjugate priors most commonly used in the literature (the “Minnesota,” “sum-of-coefficients” and “dummy-initial-observation” priors), and document that this estimation strategy generates very accurate out-of-sample predictions, both in terms of point and density forecasts. The key to success lies in the fact that this procedure automatically selects the “appropriate” amount of shrinkage, namely tighter priors when the model involves many unknown coefficients relative to the available data, and looser priors in the opposite case. Indeed, we derive an expression for the ML showing that it duly takes into account the trade-off between in-sample fit and model complexity.

Because of this feature, the hierarchical BVAR improves over naïve benchmarks and flat-prior VARs, even for small-scale models, for which the optimal shrinkage is low, but not zero. In addition, the hierarchical BVAR outperforms the most popular ad-hoc procedures to select hyperparameters (see Litterman, 1980; Banbura, Giannone, and Reichlin, 2010). Finally, we find that the forecasting performance of the model typically improves as we include more variables, and is comparable to that of factor models. This is remarkable because the latter are among the most successful forecasting methods in the literature.

Our second contribution is documenting that this hierarchical BVAR approach also performs very well in terms of accuracy of the estimation of impulse response functions in identified VARs. We conduct two experiments to make this point. First, we study the transmission of an exogenous increase in the federal funds rate in a large-scale model with 22 variables. The estimates of the impulse responses that we obtain are broadly in line with the usual narrative of the effects of an exogenous tightening in monetary policy. This finding, together with the result that the same large-scale model produces good forecasts, indicates that our approach is able to effectively deal with the curse of dimensionality. However, in this empirical exercise there is no way of formally checking the accuracy of the estimated impulse response functions, since we do not have a directly observable counterpart of these objects in the data. Therefore, we conduct a second exercise, which is a controlled Monte Carlo experiment. Namely, we simulate data from a micro-founded, medium-scale, dynamic stochastic general equilibrium model estimated on U.S. postwar data. We then use the simulated data to estimate our hierarchical BVAR, and compare the implied impulse responses to monetary policy shocks to those of the true data generating process. This experiment lends strong support to our model. The surprising finding is in fact that the hierarchical Bayesian procedure generates very little bias, while drastically increasing the efficiency of the impulse response estimates relative to standard flat-prior VARs.

1.2 Related literature

Hierarchical modeling (or Empirical Bayes, i.e. its frequentist version) has been successfully adopted in many fields (see Berger, 1985; Gelman, Carlin, Stern, and Rubin, 2004, for an overview). It has also been advocated by the first proponents of BVARs (see Doan, Litterman and Sims, 1984, Sims and Zha, 1998, and, more recently, Canova, 2007 and Del Negro and Schorfheide, 2012), but seldom formally implemented in this context. Exceptions to this statement include Del Negro and Schorfheide (2004) and Del Negro, Schorfheide, Smets, and Wouters (2007), who use the ML to choose the tightness of a prior for VARs derived from the posterior density of a dynamic stochastic general equilibrium model. In the context of time-varying VARs, the ML has been used by Primiceri (2005) and Belmonte, Koop, and Korobilis (2011) to choose the informativeness of the prior distribution for the time variation of coefficients and volatilities. Relative to these authors, our focus is on BVARs with standard conjugate priors, for which the posterior of the hyperparameters is available in closed form.

Closer to our framework, Phillips (1995) chooses the hyperparameters of the Minnesota prior for VARs using the asymptotic posterior odds criterion of Phillips and Ploberger (1994), which is also related to the ML. Del Negro and Schorfheide (2004, 2011), Carriero, Kapetanios, and Marcellino (2010) and Carriero, Clark, and Marcellino (2011) have used the ML to select the variance of a Minnesota prior from a grid of possible values. We generalize this approach to the optimal selection of a variety of commonly adopted prior distributions for BVARs. This includes the prior on the sum of coefficients proposed by Doan, Litterman, and Sims (1984), which turns out to be crucial to enhance the forecasting performance of the model. Moreover, relative to these studies, we take an explicit hierarchical modeling approach that allows us to take the uncertainty about hyperparameters into account, and to evaluate the density forecasts of the model.

More importantly, we also complement the model's forecasting evaluation with an assessment of the performance of hierarchical BVARs for impulse response estimation, which is new in the literature.

Finally, we document that our approach works well for models of very different scale, including 3-variable VARs and much larger-scale ones. In this respect, our work relates to the growing literature on forecasting using factors extracted from large information sets (see, for example, Forni, Hallin, Lippi, and Reichlin, 2000; Stock and Watson, 2002b), Large Bayesian VARs (Banbura, Giannone, and Reichlin, 2010; Koop, 2011) and empirical Bayes regressions with large sets of predictors (Knox, Stock, and Watson, 2000).

The rest of the paper is organized as follows. Sections 2 and 3 provide some additional details about the computation and interpretation of the ML, and the priors and hyperpriors used in our investigation. Sections 4 and 5 focus instead on the empirical application to macroeconomic forecasting and impulse response estimation. Section 6 concludes.

2 The Choice of Hyperparameters for BVARs

In the previous section, we have argued that the most natural way of choosing the hyperparameters of a model is based on their posterior distribution. This posterior is proportional to the product of the hyperprior and the ML. The hyperprior is a “level-two” prior on the hyperparameters, while the ML is the likelihood of the observed data as a function of the hyperparameters, which can be obtained by integrating out the model's coefficients, as in equation (1.2).

Although this procedure can be applied very generally, in this paper we restrict our attention to prior distributions for VAR coefficients belonging to the following Normal-Inverse-Wishart family:

$$\Sigma \sim IW(\Psi; d) \qquad (2.3)$$

$$\beta|\Sigma \sim N(b, \Sigma \otimes \Omega), \qquad (2.4)$$

where the elements $\Psi$, $d$, $b$ and $\Omega$ are typically functions of a lower dimensional vector of hyperparameters $\gamma$. We focus on these priors for two reasons. First of all, this class includes the priors most commonly used by the existing literature on BVARs (see the surveys of Koop and Korobilis, 2010; Del Negro and Schorfheide, 2011; Karlsson, 2012).3 Second, the prior (2.3)-(2.4) is conjugate and has the advantage that the ML of the BVAR can be computed in closed form as a function of $\gamma$.

In appendix A, we prove that

$$p(y|\gamma) \propto \underbrace{\left| \big(V^{\mathrm{posterior}}_{\varepsilon}\big)^{-1} V^{\mathrm{prior}}_{\varepsilon} \right|^{\frac{T-p+d}{2}}}_{\text{Fit}} \cdot \underbrace{\prod_{t=p+1}^{T} \left| V_{t|t-1} \right|^{-\frac{1}{2}}}_{\text{Penalty for model complexity}}, \qquad (2.5)$$

where $V^{\mathrm{posterior}}_{\varepsilon}$ and $V^{\mathrm{prior}}_{\varepsilon}$ are the posterior and prior means (or modes) of the residual variance, and $V_{t|t-1} \equiv E_{\Sigma}\big[\mathrm{var}\big(y_t | y^{t-1}, \Sigma\big)\big]$ is the variance (conditional on $\Sigma$) of the one-step-ahead forecast of $y$, averaged across all possible a-priori realizations of $\Sigma$. While exact closed-form expressions for these objects are provided in the appendix, here we stress that the ML consists of two crucial terms. The first term depends on the in-sample fit of the model, and it increases when the posterior residual variance falls relative to the prior variance. Thus, everything else equal, the ML criterion favors hyperparameter values that generate smaller residuals. The second term in (2.5) is instead a penalty for model complexity. This term penalizes models with imprecise out-of-sample forecasts due to either large a-priori residual variances or high uncertainty of the parameter estimates. Such models have a higher a-priori chance of capturing any possible behavior of the data, while, at the same time, assigning very low probability to all possible outcomes. This feature is the essence of overfitting and is penalized by the ML criterion. Therefore, the ML captures the standard trade-off between model fit and complexity.
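Because the ML is available in closed form, evaluating it for a candidate $\gamma$ reduces to a few matrix decompositions. The sketch below implements the Normal-Inverse-Wishart formula implied by the derivation in appendix A (illustrative names; `multigammaln` is SciPy's multivariate log-gamma); it is a minimal sketch, not the authors' implementation.

```python
import numpy as np
from scipy.special import multigammaln

def log_ml(y, x, b_mat, Omega, Psi, d):
    """Log ML of the conjugate Normal-Inverse-Wishart BVAR.

    y: (T-p, n) data, x: (T-p, k) regressors, b_mat: (k, n) prior mean in
    matrix form, Omega: (k, k), Psi: (n, n), d: prior degrees of freedom.
    Implements p(y) = pi^{-n(T-p)/2} Gamma_n((T-p+d)/2)/Gamma_n(d/2)
      * |Omega|^{-n/2} |x'x + Omega^{-1}|^{-n/2} |Psi|^{d/2} |Psi_hat|^{-(T-p+d)/2}.
    """
    t, n = y.shape                       # t = T - p effective observations
    Om_inv = np.linalg.inv(Omega)
    K = x.T @ x + Om_inv
    B_hat = np.linalg.solve(K, x.T @ y + Om_inv @ b_mat)
    E = y - x @ B_hat                    # posterior residuals
    Psi_hat = Psi + E.T @ E + (B_hat - b_mat).T @ Om_inv @ (B_hat - b_mat)
    _, ld_Om = np.linalg.slogdet(Omega)
    _, ld_K = np.linalg.slogdet(K)
    _, ld_Psi = np.linalg.slogdet(Psi)
    _, ld_Psih = np.linalg.slogdet(Psi_hat)
    return (-0.5 * n * t * np.log(np.pi)
            + multigammaln(0.5 * (t + d), n) - multigammaln(0.5 * d, n)
            - 0.5 * n * (ld_Om + ld_K)
            + 0.5 * d * ld_Psi - 0.5 * (t + d) * ld_Psih)
```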

The fact that the ML is available in closed form simplifies inference substantially, because it makes it easy to either maximize or simulate the posterior of the hyperparameters. As we have pointed out in the introduction, the advantage of the approach based on maximization is that, under a flat hyperprior, it is an Empirical Bayes procedure and has a classical interpretation. It also coincides with selecting the hyperparameters that maximize the one-step-ahead out-of-sample forecasting performance of the model. On the other hand, the full posterior simulation allows us to account for the estimation uncertainty of the hyperparameters, and has an interpretation of Bayesian hierarchical modeling. This approach can be implemented using a simple Markov chain Monte Carlo algorithm. In particular, we use a Metropolis step to draw the low dimensional vector of hyperparameters. Conditional on a value of $\gamma$, the VAR coefficients $[\beta, \Sigma]$ can then be drawn from their posterior, which is Normal-Inverse-Wishart. Appendix B presents the details of this procedure.
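A minimal sketch of such a sampler is given below, assuming a function `log_post` that returns the log ML plus the log hyperprior (for instance, built from the `log_ml` sketch above) and returns minus infinity for invalid hyperparameter values; the random-walk proposal and step size are illustrative assumptions.

```python
import numpy as np

def metropolis_hyper(log_post, gamma0, n_draws, step=0.1, seed=0):
    """Random-walk Metropolis for the low dimensional hyperparameter vector.

    log_post(gamma): log ML + log hyperprior, up to a constant.
    gamma0: initial hyperparameter vector.
    """
    rng = np.random.default_rng(seed)
    gamma = np.asarray(gamma0, dtype=float)
    lp = log_post(gamma)
    draws = np.empty((n_draws, gamma.size))
    for i in range(n_draws):
        prop = gamma + step * rng.standard_normal(gamma.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            gamma, lp = prop, lp_prop
        draws[i] = gamma
        # Conditional on gamma, (beta, Sigma) can be drawn exactly from the
        # Normal-Inverse-Wishart posterior, as in (A.8)-(A.9).
    return draws
```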

We now turn to the empirical application of our methodology.

3 Priors and Hyperpriors

This section describes the specific priors that we employ in our empirical analysis. For the sake of comparability with previous studies, we choose the most popular prior densities adopted by the existing literature for the estimation of BVARs in levels. However, it is important to stress that our method is not confined to these priors, but, as mentioned in the previous section, applies more generally to all priors belonging to the class defined by (2.3) and (2.4).

3 Some recent studies have also proposed alternative priors for VARs that do not belong to this family. See, for example, Del Negro and Schorfheide (2004), Villani (2009), Jarociński and Marcet (2010) and Koop (2011).

As in Kadiyala and Karlsson (1997), we set the degrees of freedom of the Inverse-Wishart distribution to $d = n + 2$, which is the minimum value that guarantees the existence of the prior mean of $\Sigma$ (equal to $\Psi/(d - n - 1)$). In addition, we take $\Psi$ to be a diagonal matrix with an $n \times 1$ vector $\psi$ on the main diagonal. We treat $\psi$ as a hyperparameter, which differs from the existing literature that has fixed this parameter using sample information. As for the conditional Gaussian prior for $\beta$, we combine the following prior densities:

1. The baseline prior is a version of the so-called Minnesota prior, first introduced in Litterman (1979, 1980). This prior is centered on the assumption that each variable follows a random walk process, possibly with drift, which is a parsimonious yet “reasonable approximation of the behavior of an economic variable” (Litterman, 1979, p. 20). More precisely, this prior is characterized by the following first and second moments:

$$E\big[(B_s)_{ij} | \Sigma\big] = \begin{cases} 1 & \text{if } i = j \text{ and } s = 1 \\ 0 & \text{otherwise} \end{cases}$$

$$\mathrm{cov}\big((B_s)_{ij}, (B_r)_{hm} | \Sigma\big) = \begin{cases} \lambda^2 \dfrac{1}{s^2} \dfrac{\Sigma_{ih}}{\psi_j/(d-n-1)} & \text{if } m = j \text{ and } r = s \\ 0 & \text{otherwise,} \end{cases}$$

and can be easily cast into the form of (2.4) (a schematic construction is sketched after this list). Notice that the variance of this prior is lower for the coefficients associated with more distant lags, and that coefficients associated with the same variable and lag in different equations are allowed to be correlated. Finally, the key hyperparameter is $\lambda$, which controls the scale of all the variances and covariances, and effectively determines the overall tightness of this prior.

The literature following Litterman's work has introduced refinements of the Minnesota prior to further “favor unit roots and cointegration, which fits the beliefs reflected in the practices of many applied macroeconomists” (Sims and Zha, 1998, p. 958). Loosely speaking, the objective of these additional priors is to reduce the importance of the deterministic component implied by VARs estimated conditioning on the initial observations (Sims, 1992a). This deterministic component is defined as $\tau_t \equiv E\big(y_t | y_1, \dots, y_p, \beta\big)$, i.e. the expectation of future $y$'s given the initial conditions and the value of the estimated VAR coefficients. According to Sims (1992a), in unrestricted VARs, $\tau_t$ has a tendency to exhibit temporal heterogeneity—a markedly different behavior at the beginning and the end of the sample—and to explain an implausibly high share of the variation of the variables over the sample. As a consequence, priors limiting the explanatory power of this deterministic component have been shown to improve the forecasting performance of BVARs.


2. The first prior of this type is known as the “sum-of-coefficients” prior and was originally proposed by Doan, Litterman, and Sims (1984). Following the literature, it is implemented using Theil mixed estimation, with a set of $n$ artificial observations—one for each variable—stating that a no-change forecast is a good forecast at the beginning of the sample. More precisely, we construct the following set of dummy observations:

$$\underset{n \times n}{y^{+}} = \mathrm{diag}\left(\frac{\bar{y}_0}{\mu}\right) \qquad\qquad \underset{n \times (1+np)}{x^{+}} = \big[\, 0_{n \times 1},\; y^{+}, \dots, y^{+} \,\big],$$

where $\bar{y}_0$ is an $n \times 1$ vector containing the average of the first $p$ observations for each variable, and the expression $\mathrm{diag}(v)$ denotes the diagonal matrix with the vector $v$ on the main diagonal. These artificial observations are added on top of the data matrices $y \equiv [y_{p+1}, \dots, y_T]'$ and $x \equiv [x_{p+1}, \dots, x_T]'$, which are then used for inference. The prior implied by these dummy observations is centered at 1 for the sum of coefficients on own lags for each variable, and at 0 for the sum of coefficients on other variables' lags. It also introduces correlation among the coefficients on each variable in each equation. The hyperparameter $\mu$ controls the variance of these prior beliefs: as $\mu \to \infty$ the prior becomes uninformative, while $\mu \to 0$ implies the presence of a unit root in each equation and rules out cointegration.

3. The fact that, in the limit, the sum-of-coefficients prior is not consistent with cointegration motivates the use of an additional prior introduced by Sims (1993), known as the “dummy-initial-observation” prior. It is implemented using the following dummy observation:

$$\underset{1 \times n}{y^{++}} = \frac{\bar{y}'_0}{\delta} \qquad\qquad \underset{1 \times (1+np)}{x^{++}} = \left[\frac{1}{\delta},\; y^{++}, \dots, y^{++}\right],$$

which states that a no-change forecast for all variables is a good forecast at the beginning of the sample. The hyperparameter $\delta$ controls the tightness of the prior implied by this artificial observation. As $\delta \to \infty$ the prior becomes uninformative. On the other hand, as $\delta \to 0$, all the variables of the VAR are forced to be at their unconditional mean, or the system is characterized by the presence of an unspecified number of unit roots without drift. As such, the dummy-initial-observation prior is consistent with cointegration.
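The following sketch makes these three priors concrete by constructing the Minnesota moments in the form of (2.4) and the two sets of dummy observations. All names, and the (loose) treatment of the intercept variance, are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def minnesota_moments(n, p, lam, psi, d):
    """Prior mean (matrix form) and diagonal Omega of the Minnesota prior.

    The factor Sigma_ih in the covariance formula is delivered by the
    Kronecker structure Sigma kron Omega of the conjugate prior (2.4).
    """
    k = 1 + n * p
    b = np.zeros((k, n))
    b[1:n + 1, :] = np.eye(n)           # own first lag centered at 1 (random walk)
    omega_diag = np.zeros(k)
    omega_diag[0] = 1e6                 # loose prior on the intercept (illustrative)
    for s in range(1, p + 1):
        for j in range(n):
            omega_diag[1 + (s - 1) * n + j] = lam**2 / (s**2 * psi[j] / (d - n - 1))
    return b, np.diag(omega_diag)

def sum_of_coefficients(y0_bar, mu, p):
    """Dummy observations y+, x+ implementing the sum-of-coefficients prior."""
    n = y0_bar.size
    y_plus = np.diag(y0_bar / mu)                          # (n, n)
    x_plus = np.hstack([np.zeros((n, 1))] + [y_plus] * p)  # (n, 1 + n*p)
    return y_plus, x_plus

def dummy_initial_observation(y0_bar, delta, p):
    """Single dummy observation y++, x++ for the dummy-initial-observation prior."""
    y_pp = (y0_bar / delta)[None, :]                       # (1, n)
    x_pp = np.hstack([np.full((1, 1), 1.0 / delta)] + [y_pp] * p)
    return y_pp, x_pp
```

The dummy observations are stacked on top of `y` and `x` before applying the conjugate posterior formulas, exactly as described in the text.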

Summing up, the setting of these priors depends on the hyperparameters $\lambda$, $\mu$, $\delta$ and $\psi$, which we treat as additional parameters. As hyperpriors for $\lambda$, $\mu$ and $\delta$, we choose Gamma densities with modes equal to 0.2, 1 and 1—the values recommended by Sims and Zha (1998)—and standard deviations equal to 0.4, 1 and 1 respectively. Finally, our prior on $\psi/(d - n - 1)$, i.e. the prior mean of the main diagonal of $\Sigma$, is an Inverse-Gamma with scale and shape equal to $(0.02)^2$. This hyperprior peaks at approximately $(0.02)^2$, since we use data in annualized log-terms. Moreover, it is proper, but quite disperse, since it has neither a variance nor a mean. We work with proper hyperpriors because they guarantee the properness of the posterior and, from a frequentist perspective, the admissibility of the estimator of the hyperparameters, which is a difficult property to check in the case of hierarchical models (see Berger, Strawderman, and Tang, 2005). Another appealing feature of non-flat hyperpriors is that they help stabilize inference when the ML happens to have little curvature with respect to some hyperparameters. For example, we have noticed that this can sometimes occur for the hyperparameters of the sum-of-coefficients or the dummy-initial-observation priors in larger-scale models. This being said, we stress that our hyperpriors are relatively diffuse, and our empirical results are confirmed when using completely flat, improper hyperpriors.
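The paper specifies these Gamma hyperpriors through their mode and standard deviation. One way to back out the implied shape and scale (an illustrative calculation, not taken from the paper) is to solve the two moment conditions:

```python
import numpy as np
from scipy import stats

def gamma_from_mode_sd(mode, sd):
    """Shape k and scale theta of a Gamma density with the given mode and
    standard deviation, using mode = (k - 1)*theta and var = k*theta^2,
    which imply sd^2 k^2 - (2 sd^2 + mode^2) k + sd^2 = 0."""
    a = sd**2
    roots = np.roots([a, -(2 * a + mode**2), a]).real
    k = roots.max()                 # the two roots multiply to 1, so max > 1
    theta = mode / (k - 1)
    return k, theta

# Hyperprior for lambda: mode 0.2, standard deviation 0.4 (Sims-Zha values).
k_lam, th_lam = gamma_from_mode_sd(0.2, 0.4)
hyperprior_lambda = stats.gamma(a=k_lam, scale=th_lam)
```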

4 Forecasting Evaluation of BVAR Models

The assessment of the forecasting performance of econometric models has become standard in macroeconomics, even when the main objective of the study is not to provide accurate out-of-sample predictions. This is because the forecasting evaluation can be thought of as a model validation procedure. In fact, if model complexity is introduced with a proliferation of parameters, instabilities due to estimation uncertainty might completely offset the gains obtained by limiting model misspecification. Out-of-sample forecasting reflects both parameter uncertainty and model misspecification, and reveals whether the benefits due to flexibility are outweighed by the fact that the more general model also captures non-prominent features of the data.

Our out-of-sample evaluation is based on the US dataset constructed by Stock and Watson (2008). We work with three different VAR models, including progressively larger sets of variables:4

1. A SMALL-scale model—the prototypical monetary VAR—with three variables, i.e. GDP, the GDP deflator and the federal funds rate.

2. A MEDIUM-scale model, which includes the variables used for the estimation of the DSGE model of Smets and Wouters (2007) for the US economy. In other words, we add consumption, investment, hours worked and wages to the small model.

3. A LARGE-scale model, with 22 variables, using a dataset that nests the previous two specifications and also includes a number of important additional labor market, financial and monetary variables.

Further details on the database are reported in Table 1.

4 The complete database in Stock and Watson (2008) includes 149 quarterly variables from 1959Q1 to 2008Q4. Since several variables are monthly, we follow Stock and Watson (2008) and transform them into quarterly series by taking averages.


INSERT TABLE 1 HERE

The variables enter the models in annualized log-levels (i.e. we take logs and multiply by 4), except those already defined in terms of annualized rates, such as interest rates, which are taken in levels. The number of lags in all the VARs is set to five.

Using each of these three datasets, we produce the BVAR forecasts recursively for two horizons (1 and 4 quarters), starting with the estimation sample that ranges from 1959Q1 to 1974Q4. More precisely, using data from 1959Q1 to 1974Q4, we generate draws from the posterior predictive density of the model for 1975Q1 (one quarter ahead) and 1975Q4 (one year ahead). We then iterate the same procedure, updating the estimation sample one quarter at a time, until the end of the sample, i.e. 2008Q4. At each iteration, of course, we also re-estimate the posterior distribution of the hyperparameters. The outcome of this procedure is a time series of 137 density forecasts for each of the two forecast horizons.

We start by assessing the accuracy of our models in terms of point forecasts, defined as the median of the predictive density at each point in time. We then turn to the evaluation of the density forecasts, to assess how accurately different models capture the uncertainty around the point forecasts.

For each variable, the target of our evaluation is defined in terms of the $h$-period annualized average growth rate, i.e. $z^h_{i,t+h} = \frac{1}{h}\left[y_{i,t+h} - y_{i,t}\right]$. For variables specified in log-levels, this is approximately the average annualized growth rate over the next $h$ quarters, while for variables not transformed in logs this is the average quarterly change over the next $h$ quarters.
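As an illustration, the evaluation target and the mean squared forecast error of the next subsection can be computed as follows (a minimal sketch with hypothetical inputs):

```python
import numpy as np

def h_step_target(series, h):
    """h-period annualized average growth rate z_{t+h} = (y_{t+h} - y_t)/h
    for a 1-D series in annualized log-levels (or in levels, for rates)."""
    series = np.asarray(series)
    return (series[h:] - series[:-h]) / h

def msfe(point_forecasts, realized):
    """Mean squared forecast error over the evaluation sample."""
    err = np.asarray(point_forecasts) - np.asarray(realized)
    return np.mean(err**2)
```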

We compare the forecasting performance of the BVAR to a VAR with a flat prior, estimated by OLS (we will refer to this model as VAR or flat-prior VAR), and to a random walk with drift, which is the model implied by a dogmatic Minnesota prior (we will refer to this model as RW). We also compare the point forecasts of the BVAR to those of a single equation model, augmented with factors extracted from a large dataset using principal components.5 Factor models offer a parsimonious representation of macroeconomic variables, while retaining the salient features of the data, which notoriously comove strongly. Hence, factor-augmented regressions are widely used to deal with the curse of dimensionality, since a large set of potential predictors can be replaced in the regressions by a much smaller number of factors. Factor-based approaches are a benchmark in the literature and have been shown to produce very accurate forecasts by exploiting large cross sections of data. Specifically, we focus on the factor-based forecasting approach of Stock and Watson (2002a,b), whose implementation details are reported in appendix C. Finally, in a later subsection, we compare the forecasting performance of our hierarchical BVAR to more heuristic procedures for the choice of the hyperparameters.

5 The principal components are extracted from the whole set of 149 variables described in Stock and Watson (2008).


4.1 Point forecasts

Table 2 analyzes the accuracy of point forecasts by reporting the mean squared forecast errors (MSFE) of real GDP, the GDP deflator and the federal funds rate.

INSERT TABLE 2 HERE

Comparing models of different size, notice that it is not possible to estimate the large-scale VAR with a flat prior. In addition, the VAR forecasts worsen substantially when moving from the small to the medium-scale model. This outcome indicates that the gains from exploiting larger information sets are completely offset by an increase in estimation error. On the contrary, the forecast accuracy of the BVARs does not deteriorate when increasing the scale of the model, and sometimes even improves substantially (as is the case for inflation). In this sense, the use of priors seems to be able to turn the curse of dimensionality into a blessing. Moreover, BVAR forecasts are systematically more accurate than the flat-prior VAR forecasts, for all the variables and horizons that we consider.

The comparison with the RW model is also favorable to the BVARs, with the possible exception of the forecasts of the federal funds rate at the one-year horizon. The improvement of BVARs over the RW, which is the prior model, indicates that our inference-based choice of the hyperparameters leads to the use of informative priors, but not excessively so, letting the data shape the posterior beliefs about the model's coefficients. Finally, notice that the performance of the prior model is particularly poor for inflation. In fact, Atkeson and Ohanian (2001) show that a random walk for the growth rate of the GDP deflator is a more appropriate naïve benchmark model. Specifically, they propose to forecast inflation over the subsequent year using the inflation rate over the past year. The MSFE of this alternative simple model for inflation at a 4-quarter horizon is 1.24, which is smaller than that obtained with the random walk in levels or with the small and medium BVARs, but higher than the corresponding MSFE of the large-scale BVAR.

Table 2 also suggests that the BVAR predictions are competitive with those of the factor model. This outcome is in line with the findings of De Mol, Giannone, and Reichlin (2008) and indicates that factor-augmented and Bayesian regressions capture the same features of the data. In fact, De Mol, Giannone, and Reichlin (2008) have shown that Bayesian shrinkage and regressions augmented with principal components are strictly connected.

4.2 Density forecasts

The point forecast evaluation of the previous subsection is a useful tool to discriminate among models, but it disregards the uncertainty assigned by each model to its point prediction. For this reason, we now turn to the evaluation of the density forecasts. We measure the accuracy of a density forecast using the log predictive score, which is simply the logarithm of the predictive density generated by a model, evaluated at the realized value of the time series. Therefore, if model A has a higher average log predictive score than model B, it means that values close to the actual realizations of a time series were a priori more likely according to model A relative to model B.
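As a concrete illustration, when the predictive density is summarized by posterior-predictive draws, one common way to evaluate the log score is through a Gaussian approximation fitted to those draws (an assumption made here for illustration; the paper does not spell out this step):

```python
import numpy as np

def log_score_gaussian(predictive_draws, realized):
    """Log predictive score via a Gaussian approximation to the simulated
    predictive density, evaluated at the realized value."""
    m = np.mean(predictive_draws)
    v = np.var(predictive_draws, ddof=1)
    return -0.5 * (np.log(2 * np.pi * v) + (realized - m) ** 2 / v)
```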

Table 3 reports the average difference between the log predictive scores of the BVARs and the competing models (the flat-prior VAR and RW models), for each variable and horizon. A positive number indicates that the density forecasts produced by our proposed procedure are more accurate than those of the alternative models. In addition, the HAC estimate of its standard deviation (in parentheses) gives a rough idea of the statistical significance and the volatility of this difference.6

INSERT TABLE 3 HERE

Table 3 makes clear that the BVAR forecasts outperform those of the RW and flat-prior VAR also when evaluating the whole density.

4.3 Inspecting the mechanism

In this subsection, we provide some intuition about why the hierarchical procedure described in the previous sections generates accurate forecasts. As we have discussed at length in the introduction, VAR models require the estimation of many free parameters, which, when using a flat prior, leads to high estimation uncertainty and overfitting. It is therefore beneficial to shrink the model parameters towards a parsimonious prior model. The key to the success of the hierarchical BVAR is that it automatically infers the “appropriate” amount of shrinkage, by selecting the tightness of the prior distribution. For example, the procedure will select looser priors for models with fewer parameters, and tighter priors for models with many parameters relative to the available data.

To illustrate this point, consider a much simplified version of our model, i.e. a BVAR with only a Minnesota prior, and the prior mean of the diagonal elements of $\Sigma$ set equal to the variance of the residuals of an AR(1) for each variable (as in Kadiyala and Karlsson, 1997). This model is convenient because it involves only one hyperparameter, namely the hyperparameter $\lambda$ governing the overall standard deviation of the Minnesota prior. For each dataset—small, medium and large—we estimate our hierarchical BVAR on the full sample, and compute the posterior distribution of the hyperparameter $\lambda$. These posteriors are plotted in figure 1, along with the hyperprior. Notice that, in line with intuition, the posterior mode (and variance) of $\lambda$ decreases with the size of the model. In other words, the larger the size of the BVAR, the more likely it is that we should shrink the model toward the parsimonious specification implied by the Minnesota prior.

INSERT FIGURE 1 HERE

4.4 Comparison with alternative methods

Given the good forecasting performance of our inference-based methodology for choosing the hyperparameters (as good as that of factor models), a section discussing the relative performance of alternative methods seems warranted. However, formal alternatives to the marginal likelihood are absent in the literature. For instance, the Bayesian or the Akaike information criteria cannot be adopted, because their penalization for model complexity only involves the number of parameters, and does not depend on the value of the hyperparameters. As a consequence, both of these criteria would favor models with loose priors that maximize the model's in-sample fit.

6 Notice that the associated t-statistic corresponds to the statistic of Amisano and Giacomini (2007), with standard Normal distribution, when the models are estimated using a rolling scheme. This is not the case in our exercise, since we use a recursive estimation procedure.

An informal method to choose the hyperparameters is to maximize the model's forecasting performance over a pre-sample, as in Litterman (1980). An alternative possibility is to control for over-fitting by targeting a desired in-sample fit, as in Banbura, Giannone, and Reichlin (2010). These heuristic procedures can be interpreted as rough empirical Bayes estimators, and their ad-hoc nature might partly explain why Bayesian VARs have encountered a number of opponents, especially among non-Bayesian researchers. These approaches obviously raise a number of questions: what is the right size of the pre-sample and the forecasting horizon? Should we minimize the MSFE or control for the in-sample fit of all the variables, or just those of interest? Moreover, these procedures make it hard to conduct inference incorporating hyperparameter uncertainty. Despite these limitations, these are the most popular approaches in the literature, and we have compared them to our methodology.

Concerning the first method, we have repeated our forecasting experiment by choosing, at each point in time, the hyperparameters that maximize the past forecasting ability of the VAR. In particular, to follow Litterman (1980) as closely as possible, the measure of out-of-sample forecasting performance is the Theil-U statistic, computed over the previous 5 years, and averaged across variables and forecasting horizons (1 to 4). As for the second method, we have replicated Banbura, Giannone, and Reichlin (2010) by setting the hyperparameters in the medium and large BVARs to match the average in-sample fit of the small VAR with flat priors.7 Overall, we find that the performance of these two approaches is similar, and considerably worse than our methodology. In fact, they generate MSFEs that are up to 40 (Litterman, 1980) and 65 percent (Banbura, Giannone, and Reichlin, 2010) higher than those reported in table 2, with a particularly poor forecasting performance for inflation.

Finally, note that some authors do not even perform an informal search for the optimal hyperparameters, but simply use values from previous studies. For example, a common choice are the hyperparameters of Sims and Zha (1998), which are also the values around which we center our hyperpriors. We have experimented with these fixed hyperparameters and, quite interestingly, have found that they improve over the heuristic procedures of Litterman (1980) and Banbura, Giannone, and Reichlin (2010) in our empirical application. In fact, the MSFE is only up to 20 percent worse than our method for the small and medium BVARs, and comparable to our method for the large BVAR. On the one hand, this result is somewhat unexpected, because these hyperparameters are not data dependent, and imply the same amount of shrinkage regardless of the size and frequency of the model. This suggests that BVARs might improve forecast accuracy over models with flat priors across a relatively wide range of parameter settings. On the other hand, the relatively good performance of Sims and Zha's hyperparameters should not be very surprising, given that they seem to be based on a rough optimization of the ML (as the authors have told us in a private conversation). It is clear, however, that these specific values of the hyperparameters are not guaranteed to work well for other applications—possibly outside the range of US macroeconomic time series—and cannot be applied to different priors. On the contrary, the main appeal of our methodology is that it can be used in a wide range of models and applications, requiring little human judgement in the search for reasonable ranges of hyperparameters. Consequently, there is also less need for the extensive robustness checks that characterize empirical works using more ad-hoc methodologies.

7 Banbura, Giannone, and Reichlin (2010) define the in-sample fit as the percentage deviation of the in-sample MSFE from the MSFE of the no-change forecast.

5 Structural BVARs and Estimation of Impulse Response Functions

The forecast accuracy of the hierarchical modeling procedure proposed in this paper is quite remarkable, and in line with the interpretation of the marginal likelihood as a measure of out-of-sample forecasting performance. However, VARs are not used in the literature only for forecasting, but also as a tool to identify structural shocks and assess their transmission mechanism. Inspired by an important insight of statistical decision theory—the separation between loss functions and probability models—we now present evidence that the same hierarchical modeling strategy also delivers accurate estimates of the impulse response functions to structural shocks.

More specifically, in this section we perform two exercises. First, we estimate the impulse responses to monetary policy shocks using our large-scale BVAR with 22 variables. The analysis of the effects of monetary policy innovations is widespread in the literature because, among other things, it allows one to discriminate between competing theoretical models of the economy (Christiano, Eichenbaum, and Evans, 1999). The purpose of this first exercise is to demonstrate that our hierarchical procedure allows us to obtain plausible estimates of impulse response functions even when working with large-scale models, which is not the case for flat-prior VARs. However, we do not have an observable counterpart of these impulse responses in the data that can be used to directly check their accuracy. This motivates our second exercise, which is a controlled Monte Carlo experiment. In a nutshell, we simulate artificial datasets from a dynamic stochastic general equilibrium (DSGE) model, and assess the gains in accuracy in the estimation of impulse responses to monetary policy shocks of our hierarchical procedure over flat-prior VARs.

Concerning our first exercise, the monetary policy shock is identified using a relatively standard recursive identification scheme, assuming that prices and real activity do not react contemporaneously to the monetary policy shock. The only variables that can react contemporaneously to monetary policy shocks are the financial variables (bond rates and stock prices), the exchange rate and M2, while the policy rate does not react contemporaneously to financial variables (see Christiano, Eichenbaum, and Evans, 1999). Figures 2, 3 and 4 report the median and the 16th and 84th percentiles of the posterior distribution of the impulse responses to a monetary policy shock estimated in the large-scale model, using the full sample. The distribution of the impulse responses encompasses uncertainty on both the parameters and the hyperparameters.
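A minimal sketch of how such responses can be computed for one posterior draw under a recursive scheme is given below (illustrative names; in the paper the ordering of the 22 variables determines which responses are restricted on impact, and the posterior bands come from repeating this over draws of the parameters and hyperparameters):

```python
import numpy as np

def recursive_irf(B, Sigma, n, p, horizon, shock):
    """Impulse responses under a recursive (Cholesky) identification.

    B: (1 + n*p, n) reduced-form coefficients [C, B_1, ..., B_p]',
    Sigma: (n, n) residual covariance, shock: index of the shocked variable
    in the recursive ordering. Returns an (horizon + 1, n) array.
    """
    A = [B[1 + s * n:1 + (s + 1) * n, :].T for s in range(p)]  # B_1, ..., B_p
    irf = np.zeros((horizon + 1, n))
    irf[0] = np.linalg.cholesky(Sigma)[:, shock]   # one-standard-deviation impact
    for h in range(1, horizon + 1):
        acc = np.zeros(n)
        for s in range(1, min(h, p) + 1):
            acc += A[s - 1] @ irf[h - s]
        irf[h] = acc
    return irf
```

With a lower-triangular impact matrix, variables ordered before the shocked one have zero contemporaneous response, which is exactly the restriction described in the text.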

INSERT FIGURES FROM 2 TO 4 HERE

A one-standard-deviation (approximately 60 basis points) exogenous increase in the federal funds rate generates a substantial contraction in GDP, employment and all other variables related to economic activity. Monetary aggregates also decrease on impact, indicating strong liquidity effects. Moreover, stock prices decline, the exchange rate appreciates and the yield curve flattens. Prices decrease with a delay. Notice that, with the exception of the CPI, the response of prices does not exhibit the so-called price puzzle, i.e. a counterintuitive positive response to a monetary contraction, which is instead typical of VARs with small information sets (on this point, see Sims, 1992b; Bernanke, Boivin, and Eliasz, 2005; Banbura, Giannone, and Reichlin, 2010). These responses are all in line with intuition, and hence lend support to our hierarchical procedure. On the other hand, there is no formal way to assess the accuracy of this estimation, since there is no counterpart of these responses directly observable in the data. This is why we now turn to our second exercise.

In our controlled Monte Carlo experiment, we adopt a medium-scale DSGE model to simulate 500 artificial time series of length 200 quarters, for the following seven macro variables: output (Y), consumption (C), investment (I), hours worked (H), wages (W), prices (P) and the short-term interest rate (R). For each dataset, we estimate the impulse responses to a monetary policy shock with our hierarchical BVAR model and a flat-prior VAR, and compare these estimates to the true impulse responses of the theoretical model.

The DSGE that we use to simulate the data is identical to Justiniano, Primiceri, and Tambalotti (2010), with the exception that the behavior of the private sector is predetermined with respect to the monetary policy shock, as in Christiano, Eichenbaum, and Evans (2005). This justifies the use of a recursive scheme for the identification of monetary policy shocks in the BVAR and the VAR. Finally, the DSGE is parameterized using the posterior mode of the unknown coefficients, estimated using U.S. data on output growth, consumption growth, investment growth, hours, wage inflation, price inflation and the federal funds rate, as in Justiniano, Primiceri, and Tambalotti (2010). This is a good laboratory to study the question at hand, since it is well known that this class of medium-scale DSGE models fits the data quite well (Smets and Wouters, 2007).

Figure 5 reports the theoretical DSGE impulse responses to a monetary policy shock (solid line), and the average across replications of the median responses using our hierarchical procedure (dashed line) and the flat-prior VAR (dotted line). Both the BVAR and the VAR responses replicate the shape of the true impulse responses quite well. In general, the bias introduced by using an informative prior is not substantially larger than the small sample bias of the flat-prior VAR.8

8 We have also computed the impulse responses to a monetary policy shock in the theoretical VAR(5) representation of the DSGE model. These responses are extremely similar to the DSGE responses.


INSERT FIGURE 5 HERE

However, the difference between the average median across replications and the theoretical impulse response, the bias, represents only one dimension of accuracy. In order to take into account also the standard deviation of the errors across replications, we need to look at the average squared error across replications.

In more detail, for each replication we compute the overall error as the difference between the theoretical response and the estimated median response, across variables and horizons. Then, for each variable and horizon, we take the average of the squared errors across replications (MSE). Figure 6 reports the ratio between the MSE of the flat-prior VAR and that of the hierarchical BVAR.
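Schematically, with hypothetical array names, the accuracy statistic is:

```python
import numpy as np

def irf_mse(estimated, truth):
    """Average squared error across Monte Carlo replications.

    estimated: (replications, horizon, variables) array of median IRF
    estimates; truth: (horizon, variables) theoretical DSGE responses.
    The ratio of two such MSE arrays (flat-prior VAR over hierarchical
    BVAR) gives the comparison reported in figure 6.
    """
    err = np.asarray(estimated) - np.asarray(truth)[None, ...]
    return np.mean(err**2, axis=0)
```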

INSERT FIGURE 6 HERE

This ratio is greater than one for most variables and horizons, indicating that the hierarchical BVAR yields very substantial accuracy gains. For instance, depending on the horizon, the impulse responses of output, consumption, investment, hours and wages based on the BVAR can be about twice as accurate. An important exception is the response of the federal funds rate, which is estimated to be too persistent and to decay too slowly when using informative priors (see figures 5 and 6). Further experimentation reveals that this excessively persistent behavior is due to the sum-of-coefficients prior. While this prior is very important to enhance the forecasting performance of the model, the outcomes in figures 5 and 6 suggest that more sophisticated priors might be needed to discipline the behavior of the model at low frequencies. It is also reasonable to expect that these more sophisticated priors should be based on insights coming from economic theory (on this point, see, for example, Del Negro and Schorfheide, 2004; Villani, 2009), since it is well known that the data are less informative about low frequency trends.

6 Conclusion

In this paper, we have studied the problem of how to choose the informativeness of a variety of commonly used prior distributions for VAR models. Our approach consists of treating the coefficients of the prior as additional parameters, in the spirit of hierarchical modeling. We have shown that this approach is theoretically grounded, easy to implement, and performs very well both in terms of out-of-sample forecasting and in terms of accuracy in the estimation of impulse response functions. Moreover, it greatly reduces the number and importance of subjective choices in the setting of the prior. In sum, this hierarchical modeling procedure is beneficial for both reduced-form and structural analysis with VARs. It may also prove particularly useful for the increasingly large literature on DSGE models, since it is typical in this literature to validate a theoretical model by comparing its fit and impulse responses to those of VARs.


A The Marginal Likelihood of BVARs with Conjugate Priors

This appendix derives an analytical expression for the ML of BVARs with conjugate priors (possibly implemented using dummy observations), and proves the fit-complexity trade-off result stated in (2.5) in the main text.

A.1 Analytical derivation of the ML

Consider the VAR model of section 1,

y_t = C + B_1 y_{t−1} + ... + B_p y_{t−p} + ε_t,   t = 1, ..., T,
ε_t ~ N(0, Σ),

and rewrite it as

Y = Xβ + ε,
ε ~ N(0, Σ ⊗ I_{T−p}),

where y ≡ [y_{p+1}, ..., y_T]', Y ≡ vec(y), x_t ≡ [1, y'_{t−1}, ..., y'_{t−p}]', X_t ≡ I_n ⊗ x'_t, x ≡ [x_{p+1}, ..., x_T]', X ≡ I_n ⊗ x, ε ≡ [ε_{p+1}, ..., ε_T]', ε ≡ vec(ε), B ≡ [C, B_1, ..., B_p]' and β ≡ vec(B). Finally, define the number of regressors for each equation by k ≡ np + 1.

As in section 2, the prior on (β, Σ) is given by the following Normal-Inverse-Wishart distribution:⁹

Σ ~ IW(Ψ, d)
β|Σ ~ N(b, Σ ⊗ Ω),

where, for simplicity, we are not explicitly conditioning on the hyperparameters b, Ω, Ψ and d.

The unnormalized posterior of (β, Σ) can be obtained by multiplying the prior density by the likelihood function. If we condition on the initial p observations of the sample, which is a standard assumption, we obtain:

p(β, Σ|Y) = (1/(2π))^{n(T−p+k)/2} · [|Σ|^{−(T−p+k+n+d+1)/2} |Ω|^{−n/2} |Ψ|^{d/2} e^{−(1/2)tr(ΨΣ^{−1})}] / [2^{nd/2} · Γ_n(d/2)] ·
e^{−(1/2)[(Y−Xβ)'(Σ⊗I_{T−p})^{−1}(Y−Xβ) + (β−b)'(Σ⊗Ω)^{−1}(β−b)]}.   (A.6)

⁹We are using the following parameterization of the Inverse-Wishart density: p(Σ|Ψ, d) = [|Ψ|^{d/2} · |Σ|^{−(n+d+1)/2} · e^{−(1/2)tr(ΨΣ^{−1})}] / [2^{nd/2} · Γ_n(d/2)].


Tedious algebraic manipulations of (A.6) yield the expression

p(β, Σ|Y) = (1/(2π))^{n(T−p+k)/2} · [|Σ|^{−(T−p+k+n+d+1)/2} |Ω|^{−n/2} |Ψ|^{d/2} e^{−(1/2)tr(ΨΣ^{−1})}] / [2^{nd/2} · Γ_n(d/2)] ·
e^{−(1/2)[(β−β̂)'[X'(Σ⊗I_{T−p})^{−1}X + (Σ⊗Ω)^{−1}](β−β̂) + (β̂−b)'(Σ⊗Ω)^{−1}(β̂−b) + ε̂'(Σ⊗I_{T−p})^{−1}ε̂]},   (A.7)

where B̂ ≡ (x'x + Ω^{−1})^{−1}(x'y + Ω^{−1}b̄), β̂ ≡ vec(B̂), ε̂ ≡ y − xB̂, ε̂ ≡ vec(ε̂), and b̄ is a k × n matrix obtained by reshaping the vector b in such a way that each column corresponds to the prior mean of the coefficients of each equation (i.e. b ≡ vec(b̄)).

It can then be shown that (A.7) is the kernel of the following Normal-Inverse-Wishart posterior distribution:

Σ|Y ~ IW(Ψ + ε̂'ε̂ + (B̂ − b̄)'Ω^{−1}(B̂ − b̄), T − p + d)   (A.8)

β|Σ, Y ~ N(β̂, Σ ⊗ (x'x + Ω^{−1})^{−1}).   (A.9)
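As an illustration, a minimal numpy/scipy sketch of these posterior formulas follows; y and x are the (T−p) × n and (T−p) × k data matrices defined above, b_bar is the k × n prior mean b̄, and the function names are ours, not the authors' code.

import numpy as np
from scipy.stats import invwishart

def niw_posterior(y, x, b_bar, Omega, Psi, d):
    # Posterior moments of the Normal-Inverse-Wishart in (A.8)-(A.9)
    Oinv = np.linalg.inv(Omega)
    K = x.T @ x + Oinv                              # x'x + Omega^{-1}
    B_hat = np.linalg.solve(K, x.T @ y + Oinv @ b_bar)
    E_hat = y - x @ B_hat                           # residuals at the posterior mean
    S = Psi + E_hat.T @ E_hat + (B_hat - b_bar).T @ Oinv @ (B_hat - b_bar)
    return B_hat, S, K, y.shape[0] + d              # scale and dof of (A.8)

def draw_niw(B_hat, S, K, dof, rng):
    # One joint draw: Sigma from (A.8), then beta | Sigma from (A.9)
    Sigma = invwishart.rvs(df=dof, scale=S, random_state=rng)
    L = np.linalg.cholesky(np.linalg.inv(K))        # row covariance K^{-1}
    C = np.linalg.cholesky(Sigma)                   # column covariance Sigma
    return B_hat + L @ rng.standard_normal(B_hat.shape) @ C.T, Sigma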

The ML is the integral of the unnormalized posterior:

p(Y) = ∫∫ p(Y|β, Σ) · p(β|Σ) · p(Σ) dβ dΣ.   (A.10)

Let us start with the integral with respect to β. Substituting (A.7) into (A.10), we obtain

p(Y, Σ) = (1/(2π))^{n(T−p+k)/2} · [|Σ|^{−(T−p+k+n+d+1)/2} |Ω|^{−n/2} |Ψ|^{d/2} e^{−(1/2)tr(ΨΣ^{−1})}] / [2^{nd/2} · Γ_n(d/2)] ·
∫ e^{−(1/2)[(β−β̂)'[X'(Σ⊗I_{T−p})^{−1}X + (Σ⊗Ω)^{−1}](β−β̂) + (β̂−b)'(Σ⊗Ω)^{−1}(β̂−b) + ε̂'(Σ⊗I_{T−p})^{−1}ε̂]} dβ,

which can be solved by “completing the squares,” yielding

p(Y, Σ) = (1/(2π))^{n(T−p)/2} · [|Σ|^{−(T−p+n+d+1)/2} |Ω|^{−n/2} |Ψ|^{d/2} e^{−(1/2)tr(ΨΣ^{−1})}] / [2^{nd/2} · Γ_n(d/2)] ·
e^{−(1/2)[(β̂−b)'(Σ⊗Ω)^{−1}(β̂−b) + ε̂'(Σ⊗I_{T−p})^{−1}ε̂]} · |x'x + Ω^{−1}|^{−n/2}.

We are now ready to take the integral with respect to Σ:

p(Y) = (1/(2π))^{n(T−p)/2} · [|Ω|^{−n/2} |Ψ|^{d/2}] / [2^{nd/2} · Γ_n(d/2)] · |x'x + Ω^{−1}|^{−n/2} · ∫ |Σ|^{−(T−p+n+d+1)/2} e^{−(1/2)tr(ΨΣ^{−1})} e^{−(1/2)P} dΣ,   (A.11)

where P ≡ (β̂−b)'(Σ⊗Ω)^{−1}(β̂−b) + ε̂'(Σ⊗I_{T−p})^{−1}ε̂.


The expression for P can be simplified by using the following property of the vec operator:

vec(A)'(D ⊗ B)vec(C) = tr(A'BCD').

This yields

P = tr[ε̂'ε̂ Σ^{−1} + (B̂ − b̄)'Ω^{−1}(B̂ − b̄)Σ^{−1}].   (A.12)

We can now solve the integral by substituting (A.12) into (A.11), and multiplying and dividing the expression inside the integral by the constant term necessary to obtain the density of an Inverse-Wishart. These operations result in the following closed-form solution for the ML:

p(Y) = (1/π)^{n(T−p)/2} · [Γ_n((T−p+d)/2) / Γ_n(d/2)] · |Ω|^{−n/2} · |Ψ|^{d/2} · |x'x + Ω^{−1}|^{−n/2} · |Ψ + ε̂'ε̂ + (B̂ − b̄)'Ω^{−1}(B̂ − b̄)|^{−(T−p+d)/2}.   (A.13)
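In log form, (A.13) can be transcribed directly; the sketch below is illustrative and reuses the notation of this appendix.

import numpy as np
from scipy.special import multigammaln

def log_ml(y, x, b_bar, Omega, Psi, d):
    # Log marginal likelihood of the conjugate BVAR, direct form (A.13)
    Tp, n = y.shape                                 # Tp = T - p
    Oinv = np.linalg.inv(Omega)
    K = x.T @ x + Oinv
    B_hat = np.linalg.solve(K, x.T @ y + Oinv @ b_bar)
    E_hat = y - x @ B_hat
    S = Psi + E_hat.T @ E_hat + (B_hat - b_bar).T @ Oinv @ (B_hat - b_bar)
    return (-0.5 * n * Tp * np.log(np.pi)
            + multigammaln(0.5 * (Tp + d), n) - multigammaln(0.5 * d, n)
            - 0.5 * n * np.linalg.slogdet(Omega)[1]
            + 0.5 * d * np.linalg.slogdet(Psi)[1]
            - 0.5 * n * np.linalg.slogdet(K)[1]
            - 0.5 * (Tp + d) * np.linalg.slogdet(S)[1])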

A.2 Numerical issues

For large systems, (A.13) is numerically unstable, and it is convenient to replace it with the equivalent expression

p(Y) = (1/π)^{n(T−p)/2} · [Γ_n((T−p+d)/2) / Γ_n(d/2)] · |Ψ|^{−(T−p)/2} · |D'_Ω x'x D_Ω + I_k|^{−n/2} · |I_n + D'_Ψ[ε̂'ε̂ + (B̂ − b̄)'Ω^{−1}(B̂ − b̄)]D_Ψ|^{−(T−p+d)/2},   (A.14)

where D_Ω D'_Ω = Ω and D_Ψ D'_Ψ = Ψ^{−1}. The last two determinants can be computed as the product of one plus the eigenvalues of D'_Ω x'x D_Ω and D'_Ψ[ε̂'ε̂ + (B̂ − b̄)'Ω^{−1}(B̂ − b̄)]D_Ψ, respectively, which is numerically stable.
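A sketch of this stable evaluation (using that D'_Ω x'x D_Ω and Ω x'x, and likewise D'_Ψ[·]D_Ψ and Ψ^{−1}[·], share the same eigenvalues); again an illustrative transcription, not the authors' code:

import numpy as np
from scipy.special import multigammaln

def log_ml_stable(y, x, b_bar, Omega, Psi, d):
    # Log of (A.14): determinants via eigenvalues, stable for large systems
    Tp, n = y.shape
    Oinv = np.linalg.inv(Omega)
    K = x.T @ x + Oinv
    B_hat = np.linalg.solve(K, x.T @ y + Oinv @ b_bar)
    E_hat = y - x @ B_hat
    A = E_hat.T @ E_hat + (B_hat - b_bar).T @ Oinv @ (B_hat - b_bar)
    eig_fit = np.linalg.eigvals(Omega @ (x.T @ x)).real        # spectrum of D_Om' x'x D_Om
    eig_pen = np.linalg.eigvals(np.linalg.solve(Psi, A)).real  # spectrum of D_Psi' A D_Psi
    return (-0.5 * n * Tp * np.log(np.pi)
            + multigammaln(0.5 * (Tp + d), n) - multigammaln(0.5 * d, n)
            - 0.5 * Tp * np.linalg.slogdet(Psi)[1]
            - 0.5 * n * np.log1p(eig_fit).sum()
            - 0.5 * (Tp + d) * np.log1p(eig_pen).sum())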

A.3 The ML with dummy observations

It is common in the literature to implement some conjugate priors using dummy observations (e.g. the sum-of-coefficients and the dummy-initial-observation priors). In this case, the ML is given by p(Y⊕)/p(Y*), where p(·) is the function (A.13) or (A.14), Y* denotes the dummy observations, and Y⊕ is the extended set of data, consisting of Y and Y*.
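In code, this amounts to evaluating the same function twice (a sketch, reusing the hypothetical log_ml above; y_star and x_star hold the dummy observations):

import numpy as np

# log-ML with dummy-observation priors: log p(Y+) - log p(Y*)
log_p = (log_ml(np.vstack([y_star, y]), np.vstack([x_star, x]), b_bar, Omega, Psi, d)
         - log_ml(y_star, x_star, b_bar, Omega, Psi, d))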


A.4 Proof of the fit-complexity trade-off result

In order to prove (2.5), rewrite (A.13) as

p(Y) = (1/π)^{n(T−p)/2} · [Γ_n((T−p+d)/2) / Γ_n(d/2)] · [(T−p+d−n−1)/(d−n−1)]^{−n(T−p+d)/2} · |Ω|^{−n/2} · |Ψ|^{−(T−p)/2} · |x'x + Ω^{−1}|^{−n/2} ·
|[(Ψ + ε̂'ε̂ + (B̂ − b̄)'Ω^{−1}(B̂ − b̄)) / (T−p+d−n−1)]^{−1} [Ψ / (d−n−1)]|^{(T−p+d)/2}.   (A.15)

Define x^t ≡ [x_{p+1}, ..., x_t]' and notice that x^t'x^t can be written recursively as x^t'x^t = x^{t−1}'x^{t−1} + x_t x'_t. The matrix determinant lemma (Harville, 1997) implies that |x^t'x^t + Ω^{−1}| can also be expressed recursively as

|x^t'x^t + Ω^{−1}| = |x^{t−1}'x^{t−1} + Ω^{−1}| · (1 + x'_t(x^{t−1}'x^{t−1} + Ω^{−1})^{−1}x_t).   (A.16)

The iteration of (A.16), starting from the initial value |Ω^{−1}|, allows us to derive

|x'x + Ω^{−1}| = |Ω^{−1}| ∏_{t=p+1}^{T} (1 + x'_t(x^{t−1}'x^{t−1} + Ω^{−1})^{−1}x_t).
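A quick numerical check of this iterated factorization on random data (illustrative only):

import numpy as np

rng = np.random.default_rng(0)
Tp, k = 50, 7
x = rng.standard_normal((Tp, k))
Oinv = np.eye(k)                                # Omega^{-1}

lhs = np.linalg.slogdet(x.T @ x + Oinv)[1]
rhs = np.linalg.slogdet(Oinv)[1]
M = Oinv.copy()                                 # running x^{t-1}'x^{t-1} + Omega^{-1}
for t in range(Tp):
    rhs += np.log1p(x[t] @ np.linalg.solve(M, x[t]))
    M += np.outer(x[t], x[t])
assert np.isclose(lhs, rhs)                     # the two sides agree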

If we substitute this last expression into (A.15), we obtain

p(Y) = (1/π)^{n(T−p)/2} · [Γ_n((T−p+d)/2) / Γ_n(d/2)] · [(T−p+d−n−1)/(d−n−1)]^{−n(T−p+d)/2} · |Ψ|^{−(T−p)/2} ∏_{t=p+1}^{T} (1 + x'_t(x^{t−1}'x^{t−1} + Ω^{−1})^{−1}x_t)^{−n/2} ·
|[(Ψ + ε̂'ε̂ + (B̂ − b̄)'Ω^{−1}(B̂ − b̄)) / (T−p+d−n−1)]^{−1} [Ψ / (d−n−1)]|^{(T−p+d)/2},

which, using the properties of determinants and Kronecker products, can be rewritten as

p(Y) = (1/π)^{n(T−p)/2} · [Γ_n((T−p+d)/2) / Γ_n(d/2)] · [(T−p+d−n−1)/(d−n−1)]^{−n(T−p+d)/2} · ∏_{t=p+1}^{T} |Ψ ⊗ (1 + x'_t(x^{t−1}'x^{t−1} + Ω^{−1})^{−1}x_t)|^{−1/2} ·
|[(Ψ + ε̂'ε̂ + (B̂ − b̄)'Ω^{−1}(B̂ − b̄)) / (T−p+d−n−1)]^{−1} [Ψ / (d−n−1)]|^{(T−p+d)/2}.


Finally, notice that

E_Σ[var(y_t|y^{t−1}, Σ)] = E_Σ[X_t(Σ ⊗ (x^{t−1}'x^{t−1} + Ω^{−1})^{−1})X'_t + Σ]
= E_Σ[Σ ⊗ (1 + x'_t(x^{t−1}'x^{t−1} + Ω^{−1})^{−1}x_t)]
= [Ψ/(d − n − 1)] ⊗ (1 + x'_t(x^{t−1}'x^{t−1} + Ω^{−1})^{−1}x_t),

where E_Σ denotes the expectation operator with respect to Σ. We can now express the ML as

p(Y) = const · |(V_ε^{posterior})^{−1} V_ε^{prior}|^{(T−p+d)/2} · ∏_{t=p+1}^{T} |V_{t|t−1}|^{−1/2},

where

const ≡ (1/π)^{n(T−p)/2} · [Γ_n((T−p+d)/2) / Γ_n(d/2)] · (d−n−1)^{nd/2} / (T−p+d−n−1)^{n(T−p+d)/2}

V_{t|t−1} ≡ E_Σ[var(y_t|y^{t−1}, Σ)] = [Ψ/(d−n−1)] ⊗ (1 + x'_t(x^{t−1}'x^{t−1} + Ω^{−1})^{−1}x_t)

V_ε^{prior} ≡ E[Σ] = Ψ/(d−n−1)

V_ε^{posterior} ≡ E[Σ|Y] = [Ψ + ε̂'ε̂ + (B̂ − b̄)'Ω^{−1}(B̂ − b̄)] / (T−p+d−n−1),

and V_ε^{prior} and V_ε^{posterior} are the prior and posterior means of the residual variance; their analytical expressions follow from the properties of the Inverse-Wishart distribution.

B The MCMC Algorithm

This appendix presents the details of the MCMC algorithm that we use to simulate the posterior of the coefficients of the BVAR, including the hyperparameters. We use the following standard Metropolis algorithm:

1. Initialize the hyperparameters γ at their posterior mode, which requires a numerical maximization.

2. Draw a candidate value of the hyperparameters γ* from a Gaussian proposal distribution, with mean equal to γ^{(j−1)} and variance equal to c · W, where γ^{(j−1)} is the previous draw of γ, W is the inverse Hessian of the negative of the log-posterior of the hyperparameters at the peak, and c is a scaling constant chosen to obtain an acceptance rate of approximately 20 percent.

3. Set

γ^{(j)} = γ* with probability α^{(j)}, and γ^{(j)} = γ^{(j−1)} with probability 1 − α^{(j)},

where

α^{(j)} = min{1, p(γ*|y) / p(γ^{(j−1)}|y)}.

4. Draw [β^{(j)}, Σ^{(j)}] from p(β, Σ|y, γ^{(j)}), which is the density of the Normal-Inverse-Wishart distribution in (A.8)-(A.9).

5. Increment j to j + 1 and go to 2.
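A minimal sketch of this sampler follows; log_post(γ) stands for the log of the hyperparameter posterior (the log-ML of appendix A plus the log hyperprior), and gamma_mode and W come from the preliminary numerical optimization of step 1. All names are illustrative.

import numpy as np

def metropolis(log_post, gamma_mode, W, c=1.0, n_draws=10_000, rng=None):
    # Random-walk Metropolis on the hyperparameters; tune c for ~20% acceptance
    rng = rng or np.random.default_rng()
    L = np.linalg.cholesky(c * W)                           # proposal scale: c * W
    gamma, lp = gamma_mode.copy(), log_post(gamma_mode)     # step 1
    draws = []
    for _ in range(n_draws):
        cand = gamma + L @ rng.standard_normal(gamma.size)  # step 2
        lp_cand = log_post(cand)
        if np.log(rng.uniform()) < lp_cand - lp:            # step 3
            gamma, lp = cand, lp_cand
        draws.append(gamma.copy())
        # step 4 (omitted): draw (beta, Sigma) | gamma from (A.8)-(A.9)
    return np.array(draws)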

C Factor Augmented Regression

We consider the following forecasting equation:

z^h_{i,t+h} = c_i + ∑_{s=0}^{p_z−1} α_{i,s} z_{i,t−s} + ∑_{k=1}^{r} λ_{ik} f_{k,t} + e^h_{i,t+h},

where z^h_{i,t+h} denotes the h-step-ahead variable to be forecasted. The predictors f_{k,t}, k = 1, ..., r, are common factors extracted from the set of all variables. The lags of the target variable z_{i,t−s} are explicitly used as predictors in order to capture variable-specific dynamics. The regression coefficients are allowed to differ across forecast horizons, but this dependence is dropped for notational convenience.

The estimation of the forecasting equation is performed in two steps, as in Stock and Watson (2002a,b). In the first step, the common factors f_{k,t} are estimated by principal components extracted from a large set of 149 predictors. Before extracting the common factors, the data are transformed in order to achieve stationarity and standardized. For details on data definitions and transformations, see table 1 and Stock and Watson (2008).

In the second step, the coefficients are estimated by ordinary least squares. Using all the principal components (i.e. setting r equal to the number of variables, 149) would be equivalent to running an OLS regression on all the available regressors. Therefore, as in Stock and Watson (2008), we set r = 3 and p_z = 4.
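A compact sketch of this two-step procedure (principal components, then OLS); Z is the T × N panel already transformed to stationarity and standardized, z_target is the target series, and all names are ours, not the authors' code:

import numpy as np

def factor_forecast(Z, z_target, h, r=3, p_z=4):
    # Step 1: factors as the first r principal components of the panel
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    F = Z @ Vt[:r].T                            # T x r estimated factors
    # Step 2: OLS of z_{t+h} on a constant, p_z own lags and the factors
    T = Z.shape[0]
    idx = np.arange(p_z - 1, T - h)             # usable time indices
    X = np.column_stack([np.ones(idx.size)]
                        + [z_target[idx - s] for s in range(p_z)]
                        + [F[idx]])
    coef = np.linalg.lstsq(X, z_target[idx + h], rcond=None)[0]
    x_T = np.concatenate(([1.0], z_target[T - 1 - np.arange(p_z)], F[T - 1]))
    return x_T @ coef                           # h-step-ahead forecast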

References

Amisano, G., and R. Giacomini (2007): “Comparing density forecasts via weighted likelihood ratio tests,” Journal of Business and Economic Statistics, 25, 177–190.

Atkeson, A., and L. E. Ohanian (2001): “Are Phillips curves useful for forecasting inflation?,” Quarterly Review, Federal Reserve Bank of Minneapolis, (Win), 2–11.

Banbura, M., D. Giannone, and L. Reichlin (2010): “Large Bayesian VARs,” Journal of Applied Econometrics, 25(1), 71–92.

Belmonte, M., G. Koop, and D. Korobilis (2011): “Hierarchical shrinkage in time-varying parameter models,” MPRA Paper 31827, University Library of Munich, Germany.

Berger, J. O. (1985): Statistical Decision Theory and Bayesian Analysis. Berlin: Springer-Verlag.

Berger, J. O., and L. Berliner (1986): “Robust Bayes and Empirical Bayes Analysis with ε-Contaminated Priors,” The Annals of Statistics, 14, 461–486.

Berger, J. O., W. Strawderman, and T. Dejung (2005): “Posterior Propriety and Admissibility of Hyperpriors in Normal Hierarchical Models,” The Annals of Statistics, 33(2), 604–646.

Bernanke, B., J. Boivin, and P. S. Eliasz (2005): “Measuring the Effects of Monetary Policy: A Factor-augmented Vector Autoregressive (FAVAR) Approach,” The Quarterly Journal of Economics, 120(1), 387–422.

Bloor, C., and T. Matheson (2009): “Real-time conditional forecasts with Bayesian VARs: An application to New Zealand,” Reserve Bank of New Zealand Discussion Paper Series DP2009/02, Reserve Bank of New Zealand.

Canova, F. (2007): Methods for Applied Macroeconomic Research. Princeton University Press.

Carriero, A., T. Clark, and M. Marcellino (2011): “Bayesian VARs: specification choices and forecast accuracy,” Discussion paper.

Carriero, A., G. Kapetanios, and M. Marcellino (2009): “Forecasting exchange rates with a large Bayesian VAR,” International Journal of Forecasting, 25(2), 400–417.

Carriero, A., G. Kapetanios, and M. Marcellino (2010): “Forecasting Government Bond Yields,” mimeo, University of London.

Christiano, L. J., M. Eichenbaum, and C. L. Evans (1999): “Monetary policy shocks: What have we learned and to what end?,” in Handbook of Macroeconomics, ed. by J. B. Taylor and M. Woodford, vol. 1, chap. 2, pp. 65–148. Elsevier.

Christiano, L. J., M. Eichenbaum, and C. L. Evans (2005): “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy,” Journal of Political Economy, 113(1), 1–45.

De Mol, C., D. Giannone, and L. Reichlin (2008): “Forecasting using a large number of predictors: Is Bayesian shrinkage a valid alternative to principal components?,” Journal of Econometrics, 146(2), 318–328.

Del Negro, M., and F. Schorfheide (2004): “Priors from General Equilibrium Models for VARs,” International Economic Review, 45(2), 643–673.

Del Negro, M., and F. Schorfheide (2011): “Bayesian Macroeconometrics,” in The Oxford Handbook of Bayesian Econometrics, ed. by J. Geweke, G. Koop, and H. van Dijk, pp. 293–389. Oxford University Press.

Del Negro, M., F. Schorfheide, F. Smets, and R. Wouters (2007): “On the Fit of New Keynesian Models,” Journal of Business & Economic Statistics, 25, 123–143.

Doan, T., R. Litterman, and C. A. Sims (1984): “Forecasting and Conditional Projection Using Realistic Prior Distributions,” Econometric Reviews, 3, 1–100.

Forni, M., M. Hallin, M. Lippi, and L. Reichlin (2000): “The Generalized Dynamic Factor Model: identification and estimation,” Review of Economics and Statistics, 82, 540–554.

Gelman, A., J. B. Carlin, H. S. Stern, and D. B. Rubin (2004): Bayesian Data Analysis: Second Edition. Boca Raton: Chapman and Hall CRC.

Geweke, J. (2001): “Bayesian econometrics and forecasting,” Journal of Econometrics, 100(1), 11–15.

Geweke, J., and C. Whiteman (2006): “Bayesian Forecasting,” in Handbook of Economic Forecasting, ed. by G. Elliott, C. Granger, and A. Timmermann, chap. 1, pp. 3–80. Elsevier.

Giannone, D., M. Lenza, D. Momferatou, and L. Onorante (2010): “Short-Term Inflation Projections: a Bayesian Vector Autoregressive approach,” CEPR Discussion Papers 7746, C.E.P.R. Discussion Papers.

Giannone, D., M. Lenza, and L. Reichlin (2008): “Explaining The Great Moderation: It Is Not The Shocks,” Journal of the European Economic Association, 6(2-3), 621–633.

Harville, D. (1997): Matrix Algebra from a Statistician’s Perspective. Springer Verlag.

Jarociński, M., and A. Marcet (2010): “Autoregressions in small samples, priors about observables and initial conditions,” Working Paper Series 1263, European Central Bank.

Justiniano, A., G. E. Primiceri, and A. Tambalotti (2010): “Investment shocks and business cycles,” Journal of Monetary Economics, 57(2), 132–145.

Kadiyala, K. R., and S. Karlsson (1997): “Numerical Methods for Estimation and Inference in Bayesian VAR-Models,” Journal of Applied Econometrics, 12(2), 99–132.

Karlsson, S. (2012): “Forecasting with Bayesian Vector Autoregressions,” Working Papers 2012:12, Orebro University, Swedish Business School.

Knox, T., J. H. Stock, and M. W. Watson (2000): “Empirical Bayes Forecasts of One Time Series Using Many Predictors,” Econometric Society World Congress 2000 Contributed Papers 1421, Econometric Society.

Koop, G. (2003): Bayesian Econometrics. Wiley.

Koop, G. (2011): “Forecasting with Medium and Large Bayesian VARs,” Journal of Applied Econometrics, forthcoming.

Koop, G., and D. Korobilis (2010): “Bayesian Multivariate Time Series Methods for Empirical Macroeconomics,” Foundations and Trends in Econometrics, 3(4), 267–358.

Litterman, R. (1979): “Techniques of forecasting using vector autoregressions,” Federal Reserve Bank of Minneapolis Working Paper 115.

Litterman, R. (1980): “A Bayesian Procedure for Forecasting with Vector Autoregression,” Working paper, Massachusetts Institute of Technology, Department of Economics.

Litterman, R. (1986): “Forecasting With Bayesian Vector Autoregressions – Five Years of Experience,” Journal of Business and Economic Statistics, 4, 25–38.

Lopes, H. F., A. R. B. Moreira, and A. M. Schmidt (1999): “Hyperparameter estimation in forecast models,” Computational Statistics & Data Analysis, 29(4), 387–410.

Phillips, P. C. (1995): “Automated Forecasts of Asia-Pacific Economic Activity,” Cowles Foundation Discussion Papers, Cowles Foundation for Research in Economics, Yale University.

Phillips, P. C., and W. Ploberger (1994): “Posterior Odds Testing for a Unit Root with Data-Based Model Selection,” Econometric Theory, 10(3-4), 774–808.

Primiceri, G. E. (2005): “Time Varying Structural Vector Autoregressions and Monetary Policy,” Review of Economic Studies, 72, 821–852.

Robbins, H. (1956): “An Empirical Bayes Approach to Statistics,” Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, pp. 157–163.

Robertson, J. C., and E. W. Tallman (1999): “Vector autoregressions: forecasting and reality,” Economic Review, (Q1), 4–18.

Sims, C. A. (1980): “Macroeconomics and Reality,” Econometrica, 48(1), 1–48.

Sims, C. A. (1992a): “Bayesian inference for multivariate time series with trend,” mimeo, Princeton University.

Sims, C. A. (1992b): “Interpreting the macroeconomic time series facts: the effects of monetary policy,” European Economic Review, 36, 975–1000.

Sims, C. A. (1993): “A Nine-Variable Probabilistic Macroeconomic Forecasting Model,” in Business Cycles, Indicators and Forecasting, NBER Chapters, pp. 179–212. National Bureau of Economic Research, Inc.

Sims, C. A., and T. Zha (1998): “Bayesian Methods for Dynamic Multivariate Models,” International Economic Review, 39(4), 949–68.

Smets, F., and R. Wouters (2007): “Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach,” American Economic Review, 97(3), 586–606.

Stein, C. (1956): “Inadmissibility of the Usual Estimator for the Mean of a Multivariate Normal Distribution,” Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1, 197–206.

Stock, J. H., and M. W. Watson (2002a): “Forecasting Using Principal Components from a Large Number of Predictors,” Journal of the American Statistical Association, 97, 147–162.

Stock, J. H., and M. W. Watson (2002b): “Macroeconomic Forecasting Using Diffusion Indexes,” Journal of Business and Economic Statistics, 20, 147–162.

Stock, J. H., and M. W. Watson (2008): “Forecasting in Dynamic Factor Models Subject to Structural Instability,” in The Methodology and Practice of Econometrics, A Festschrift in Honour of Professor David F. Hendry, ed. by J. Castle and N. Shephard. Oxford University Press.

Villani, M. (2009): “Steady-state priors for vector autoregressions,” Journal of Applied Econometrics, 24(4), 630–650.

Wright, J. H. (2009): “Forecasting US inflation by Bayesian model averaging,” Journal of Forecasting, 28(2), 131–144.

Tables

Table 1: The description of the database

Variables                                         Mnemonic              Transf.  Transf.         Small  Medium  Large
                                                                        BVAR     Factor Model    BVAR   BVAR    BVAR
Real GDP                                          RGDP                  logs     log difference  x      x       x
GDP deflator                                      PGDP                  logs     log difference  x      x       x
Federal Funds Rate                                FedFunds              raw      difference      x      x       x
CPI                                               CPI-ALL               logs     log difference                 x
Commodity Price                                   Com:spotprice(real)   logs     log difference                 x
Industrial Production                             IP:total              logs     log difference                 x
Employment                                        Emp:total             logs     log difference                 x
Employment, services                              Emp:services          raw      difference                     x
Real Consumption                                  Cons                  logs     log difference         x       x
Real Investment                                   Inv                   logs     log difference         x       x
Residential Investment                            Res.Inv               logs     log difference                 x
Non Residential Investment                        NonResInv             logs     log difference                 x
Personal Consumption Expenditures, Price Index    PCED                  logs     log difference                 x
Gross Private Domestic Investment, Price Index    PGPDI                 logs     log difference                 x
Capacity Utilization                              CapacityUtil          raw      difference                     x
Consumer expectations                             Consumerexpect        raw      difference                     x
Hours Worked                                      Emp.Hours             logs     log difference         x       x
Real compensation per hour                        RealComp/Hour         logs     log difference         x       x
One year bond rate                                1yrT-bond             raw      difference                     x
Five years bond rate                              5yrT-bond             raw      difference                     x
SP500                                             S&P500                logs     log difference                 x
Effective exchange rate                           Exrate:avg            logs     log difference                 x
M2                                                M2                    logs     log difference                 x

Table 2: Mean squared forecast errors of point forecasts

                                         Small (S)        Medium (M)       Large (L)
Horizons      Variables                VAR     BVAR     VAR     BVAR     BVAR     Factor M.  RW
One Quarter   Real GDP                 13.57   9.61     19.18   7.97     8.18     7.29       10.23
              GDP Deflator             1.54    1.32     2.27    1.35     1.10     1.14       5.19
              Federal Funds Rate       1.61    1.04     1.83    1.03     1.00     1.25       1.06
One Year      Real GDP                 5.39    3.85     11.90   3.42     3.97     3.52       3.98
              GDP Deflator             1.61    1.45     2.22    1.58     0.96     1.01       4.65
              Federal Funds Rate       0.58    0.32     0.56    0.31     0.36     0.32       0.31

Note: The table reports the mean squared forecast errors of the BVARs and the competing models (VAR: flat-prior VAR, RW: random walk in levels with drift, Factor M.: factor augmented regression), for each variable and horizon. The evaluation sample is 1975Q1-2008Q4 for the one quarter ahead forecasts and 1975Q4-2008Q4 for the one year ahead forecasts.


Table 3: Average difference of log-scores

                                         Small (S)          Medium (M)         Large (L)
Horizons      Variables                vs VAR   vs RW     vs VAR   vs RW     vs RW
One Quarter   Real GDP                 0.10     0.06      0.31     0.16      0.17
                                       (0.04)   (0.05)    (0.05)   (0.06)    (0.06)
              GDP Deflator             0.05     0.74      0.15     0.73      0.81
                                       (0.03)   (0.09)    (0.05)   (0.09)    (0.09)
              Federal Funds Rate       0.07     0.06      0.10     0.07      0.09
                                       (0.07)   (0.08)    (0.13)   (0.08)    (0.10)
One Year      Real GDP                 0.11     0.00      0.43     0.06      0.03
                                       (0.07)   (0.09)    (0.12)   (0.09)    (0.13)
              GDP Deflator             0.05     1.00      0.02     0.88      1.18
                                       (0.10)   (0.33)    (0.22)   (0.36)    (0.30)
              Federal Funds Rate       0.26     0.07      0.27     0.05      -0.03
                                       (0.07)   (0.07)    (0.12)   (0.09)    (0.12)

Note: The table reports the average difference between the log predictive scores of the BVARs and the competing models (the flat-prior VAR and RW models), for each variable and horizon. The HAC estimate of the standard deviation of the difference between the log predictive scores of the BVARs and the competing models is reported in parentheses. The evaluation sample is 1975Q1-2008Q4 for the one quarter ahead forecasts and 1975Q4-2008Q4 for the one year ahead forecasts.


Figures

Figure 1: Posterior distribution of the hyperparameter governing the variance of the Minnesota prior

[Figure: posterior densities of the hyperparameter for the large, medium and small BVARs, plotted together with the hyperprior density.]

Note: The figure reports the posterior distribution of the hyperparameter λ, the parameter governing the variance of the Minnesota prior in the small, medium and large BVARs, and its prior distribution. The posterior distribution is obtained by using the whole sample.


Figure 2: Impulse responses of real variables

[Figure: panels for RGDP, Cons, Real Comp/Hour, Emp: total, Emp: services, Emp. Hours, IP: total, Capacity Util, Consumer expect, Res.Inv and NonResInv, over a 20-quarter horizon.]

Note: The figure reports the median (solid line) and the 16th and 84th percentiles (dashed lines) of the distribution of the impulse response functions of the large BVAR to a one standard deviation monetary policy shock.


Figure 3: Impulse responses of nominal variables

[Figure: panels for PGDP, CPI-ALL, PCED, PGPDI and Com: spot price (real), over a 20-quarter horizon.]

Note: The figure reports the median (solid line) and the 16th and 84th percentiles (dashed lines) of the distribution of the impulse response functions of the large BVAR to a one standard deviation monetary policy shock.


Figure 4: Impulse responses of financial variables

[Figure: panels for FedFunds, 1 yr T-bond, 5 yr T-bond, S&P 500, Ex rate: avg and M2, over a 20-quarter horizon.]

Note: The figure reports the median (solid line) and the 16th and 84th percentiles (dashed lines) of the distribution of the impulse response functions of the large BVAR to a one standard deviation monetary policy shock.


Figure 5: Impulse responses on simulated data

[Figure: panels for Y, C, I, H, W, P and R, each showing the DSGE, BVAR and VAR responses over a 20-quarter horizon.]

Note: The figure reports the impulse responses to a monetary policy shock in the DSGE model used to generate the data and the median across Monte Carlo replications of the BVAR and the VAR impulse responses.


Figure 6: Ratio of MSE: VAR versus BVAR

[Figure: panels for Y, C, I, H, W, P and R, each showing the MSE ratio over a 20-quarter horizon.]

Note: The figure reports the ratio of the MSE of the VAR over the MSE of the BVAR. Values larger than one indicate that the MSE of the VAR is larger than that of the BVAR.
