
DSGE Solving


Page 1: DSGE Solving

Dynamic Stochastic General Equilibrium Models

Dr. Andrea Beccarini, Willi Mutschler

Summer 2012


Page 2: DSGE Solving

Note on Dynamic Programming: The Standard Recursive Method

One considers a representative agent whose objective is to find a control (or decision) sequence $\{u_t\}_{t=0}^{\infty}$ such that

$$E_0\left[\sum_{t=0}^{\infty}\beta^t U(x_t, u_t)\right] \qquad (1)$$

is maximised, subject to

$$x_{t+1} = F(x_t, u_t, z_t) \qquad (2)$$


Page 3: DSGE Solving

where

$x_t$ is a vector of $m$ state variables at period $t$; $u_t$ is a vector of $n$ control variables;

$z_t$ is a vector of $s$ exogenous variables whose dynamics do not depend on $x_t$ and $u_t$;

$\beta \in (0, 1)$ denotes the discount factor.

The uncertainty of the model comes only from the exogenous $z_t$. Assume that $z_t$ follows an AR(1) process:

$$z_{t+1} = a + b z_t + \varepsilon_{t+1} \qquad (3)$$

where $\varepsilon$ is independently and identically distributed (i.i.d.).

The initial condition $(x_0, z_0)$ in this formulation is assumed to be given.
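As a concrete illustration of eq. (3), the exogenous process can be simulated directly once $a$, $b$ and the shock distribution are chosen. The sketch below assumes Gaussian shocks and arbitrary placeholder parameter values, neither of which is specified in these slides:

```python
import numpy as np

def simulate_ar1(a, b, sigma, z0, T, seed=0):
    """Simulate z_{t+1} = a + b*z_t + eps_{t+1}, eq. (3), with eps ~ N(0, sigma^2) i.i.d."""
    rng = np.random.default_rng(seed)
    z = np.empty(T + 1)
    z[0] = z0
    for t in range(T):
        z[t + 1] = a + b * z[t] + sigma * rng.standard_normal()
    return z

# Illustrative (placeholder) parameter values.
z_path = simulate_ar1(a=0.1, b=0.9, sigma=0.01, z0=1.0, T=200)
```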


Page 4: DSGE Solving

Solving the dynamic decision problem amounts to seeking a time-invariant policy function $G$ mapping the state and exogenous variables $(x, z)$ into the control $u$:

$$u_t = G(x_t, z_t) \qquad (4)$$

With such a policy function (or control equation), the sequences of states $\{x_t\}_{t=0}^{\infty}$ and controls $\{u_t\}_{t=0}^{\infty}$ can be generated by iterating the control equation together with the state equation (2), given the initial condition $(x_0, z_0)$ and the exogenous sequence $\{z_t\}_{t=0}^{\infty}$ generated by eq. (3), as sketched below.
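This iteration is a simple loop. A minimal sketch, assuming a hypothetical transition law F and policy function G (and any array for the exogenous path, for example one produced by the AR(1) simulator above):

```python
import numpy as np

def simulate_economy(F, G, x0, z_path):
    """Iterate u_t = G(x_t, z_t), eq. (4), and x_{t+1} = F(x_t, u_t, z_t), eq. (2)."""
    x, u = [x0], []
    for z_t in z_path[:-1]:
        u_t = G(x[-1], z_t)            # control from the policy function
        x.append(F(x[-1], u_t, z_t))   # next-period state from the transition law
        u.append(u_t)
    return x, u

# Hypothetical linear F and G and a constant exogenous path, purely for illustration.
z_path = np.full(201, 1.0)
x_path, u_path = simulate_economy(
    F=lambda x, u, z: 0.5 * x + u + z,
    G=lambda x, z: 0.2 * x,
    x0=1.0,
    z_path=z_path,
)
```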


Page 5: DSGE Solving

To find the policy function $G$ by the recursive method, one first defines a value function $V$:

$$V(x_0, z_0) = \max_{\{u_t\}_{t=0}^{\infty}} E_0\left[\sum_{t=0}^{\infty}\beta^t U(x_t, u_t)\right] \qquad (5)$$

Expression (5) can be transformed to unveil its recursive structure.


Page 6: DSGE Solving

For this purpose, one first rewrites eq. (5) as follows:

$$V(x_0, z_0) = \max_{\{u_t\}_{t=0}^{\infty}}\left\{U(x_0, u_0) + \beta E_0\left[\sum_{t=0}^{\infty}\beta^t U(x_{t+1}, u_{t+1})\right]\right\} \qquad (6)$$

It is easy to see that the second term in eq. (6) can be expressed as $\beta$ times the value $V$ as defined in eq. (5), with the initial condition $(x_1, z_1)$. Therefore, we can rewrite eq. (5) as

$$V(x_0, z_0) = \max_{\{u_t\}_{t=0}^{\infty}}\left\{U(x_0, u_0) + \beta E_0\left[V(x_1, z_1)\right]\right\} \qquad (7)$$


Page 7: DSGE Solving

The formulation of eq. (7) represents a dynamic programming problem which highlights the recursive structure of the decision problem.

In every period $t$, the planner faces the same decision problem: choosing the control variable $u_t$ that maximizes the current return plus the discounted value of the optimum plan from period $t+1$ onwards.

Since the problem repeats itself every period, the time subscripts become irrelevant.



Page 11: DSGE Solving

One can thus write eq. (7) as

$$V(x, z) = \max_{u}\left\{U(x, u) + \beta E\left[V(\tilde{x}, \tilde{z})\right]\right\} \qquad (8)$$

where the tilde over $x$ and $z$ denotes the corresponding next-period values; they are subject to eq. (2) and (3).

Eq. (8) is said to be the Bellman equation, named after Richard Bellman (1957).

If one knew the function $V$, one could then solve for $u$ via the Bellman equation, but in general this is not the case.


Page 12: DSGE Solving

The typical method in this case is to construct a sequence of value functions by iterating the following equation:

$$V_{j+1}(x, z) = \max_{u}\left\{U(x, u) + \beta E\left[V_j(\tilde{x}, \tilde{z})\right]\right\} \qquad (9)$$

In terms of an algorithm, the method can be described as follows:


Page 13: DSGE Solving

Step 1. Guess a differentiable and concave candidate value function $V_j$.

Step 2. Use the Bellman equation to find the optimum $u$ and then compute $V_{j+1}$ according to eq. (9).

Step 3. If $V_{j+1} = V_j$, stop. Otherwise, set $V_j = V_{j+1}$ and go back to Step 2.
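A minimal grid-based sketch of these steps in Python, applied to the Ramsey problem introduced later in these slides (log utility, Cobb-Douglas production, full depreciation) with the technology level held fixed; the grid, parameter values and tolerance are illustrative choices, not part of the original algorithm:

```python
import numpy as np

alpha, beta, A = 0.3, 0.95, 1.0                 # illustrative parameters
k_grid = np.linspace(0.05, 0.5, 200)            # grid for the state (capital)
V = np.zeros_like(k_grid)                       # Step 1: candidate value function

for _ in range(1000):
    # Step 2: for each state, maximize U(x, u) + beta * V_j(next state) over the control
    # (here the control is next-period capital, so consumption is C = A*K^alpha - K').
    c = A * k_grid[:, None] ** alpha - k_grid[None, :]
    utility = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
    V_new = np.max(utility + beta * V[None, :], axis=1)
    # Step 3: stop once the value function has (numerically) converged, else update and repeat.
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = k_grid[np.argmax(utility + beta * V[None, :], axis=1)]   # implied K' for each K
```

For these parameter values the resulting policy is close (up to the grid discretization) to the exact solution $K_{t+1} = \alpha\beta A K_t^{\alpha}$ derived at the end of these slides.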



Page 16: DSGE Solving

Under some regularity conditions on the functions $U$ and $F$, the convergence of this algorithm is guaranteed.

However, the difficulty of this algorithm is that in each pass through Step 2 we need to find the optimum $u$ that maximizes the right-hand side of eq. (9).

This task makes it difficult to write a closed-form algorithm for iterating the Bellman equation.


Page 17: DSGE Solving

The Euler Equation

Alternatively, one starts from the Bellman equation (8).

The first-order condition for maximizing the right-hand side of the equation takes the form:

$$\frac{\partial U(x, u)}{\partial u} + \beta E\left[\frac{\partial F(x, u, z)}{\partial u}\frac{\partial V(\tilde{x}, \tilde{z})}{\partial \tilde{x}}\right] = 0 \qquad (10)$$

The objective here is to find $\partial V/\partial x$. Assume $V$ is differentiable; then, from eq. (8) and exploiting eq. (10), it satisfies

$$\frac{\partial V(x, z)}{\partial x} = \frac{\partial U(x, G(x, z))}{\partial x} + \beta E\left[\frac{\partial F(x, G(x, z), z)}{\partial x}\frac{\partial V(\tilde{x}, \tilde{z})}{\partial \tilde{x}}\right] \qquad (11)$$

This equation is often called the Benveniste-Scheinkman formula.


Page 18: DSGE Solving

Assume $\partial F/\partial x = 0$. The above formula becomes

$$\frac{\partial V(x, z)}{\partial x} = \frac{\partial U(x, G(x, z))}{\partial x} \qquad (12)$$

Substituting this formula into eq. (10) gives rise to the Euler equation:

$$\frac{\partial U(x, u)}{\partial u} + \beta E\left[\frac{\partial F(x, u, z)}{\partial u}\frac{\partial U(\tilde{x}, \tilde{u})}{\partial \tilde{x}}\right] = 0$$

In economic analysis one often encounters models in which, after some transformation, $x$ does not appear in the transition law, so that $\partial F/\partial x = 0$ is satisfied.

However, there are still models in which such a transformation is not feasible.


Page 19: DSGE Solving

Deriving the First-Order Condition from the Lagrangian

Suppose that for the dynamic optimization problem represented by eq. (1) and (2) we define the Lagrangian $L$:

$$L = E_0\left\{\sum_{t=0}^{\infty}\beta^t U(x_t, u_t) - \beta^{t+1}\lambda_{t+1}'\left[x_{t+1} - F(x_t, u_t, z_t)\right]\right\} \qquad (13)$$

where $\lambda_t$, the Lagrange multiplier, is an $m \times 1$ vector.


Page 20: DSGE Solving

Setting the partial derivatives of $L$ with respect to $\lambda_t$, $x_t$ and $u_t$ to zero yields eq. (2) as well as

$$\frac{\partial U(x_t, u_t)}{\partial x_t} + \beta E_t\left[\frac{\partial F(x_t, u_t, z_t)}{\partial x_t}\lambda_{t+1}\right] = \lambda_t$$

$$\frac{\partial U(x_t, u_t)}{\partial u_t} + \beta E_t\left[\lambda_{t+1}\frac{\partial F(x_t, u_t, z_t)}{\partial u_t}\right] = 0 \qquad (14)$$

In comparison with the Euler equation, we find that an unobservable variable $\lambda_t$ appears in the system.

Yet, using eq. (13) and (14), one does not have to transform the model into a setting with $\partial F/\partial x = 0$.

This is an important advantage over the Euler equation.


Page 21: DSGE Solving

The Log-linear Approximation Method

Solving nonlinear dynamic optimization models by log-linear approximation has been proposed in particular by King et al. (1988) and Campbell (1994) in the context of Real Business Cycle models.

In principle, this approximation method can be applied to the first-order conditions either in terms of the Euler equation or as derived from the Lagrangian.

Formally, let $X_t$ be a variable and $\bar{X}$ the corresponding steady state. Then

$$x_t \equiv \ln X_t - \ln\bar{X}$$

is regarded as the log-deviation of $X_t$. In particular, $100\,x_t$ is (approximately) the percentage deviation of $X_t$ from $\bar{X}$.


Page 22: DSGE Solving

The general idea of this method is to replace all the necessary equations of the model by approximations which are linear in the log-deviations.

Given the approximate log-linear system, one then uses the method of undetermined coefficients to solve for the decision rule, which is also in log-linear deviation form.

The general procedure involves the following steps:

Step 1. Find the necessary equations characterizing the equilibrium law of motion of the system. These should include the state equation (2), the exogenous equation (3) and the first-order conditions, derived either as the Euler equation or from the Lagrangian (13) and (14).

Step 2. Derive the steady state of the model. This requires first parameterizing the model and then evaluating it in its certainty-equivalence form.

Step 3. Log-linearize the necessary equations characterizing the equilibrium law of motion of the system.



Page 28: DSGE Solving

The following building blocks for such a log-linearization can be used:

$$X_t \approx \bar{X} e^{x_t} \qquad (15)$$

$$e^{x_t + a y_t} \approx 1 + x_t + a y_t \qquad (16)$$

$$x_t y_t \approx 0 \qquad (17)$$

Step 4. Solve the log-linearized system for the decision rule (which is also in log-linear form) with the method of undetermined coefficients.
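A quick numerical check of the building blocks (15)-(17) for small deviations; the numbers below are arbitrary illustrations:

```python
import numpy as np

X_bar = 2.0                            # a steady-state value (illustrative)
X_t = 2.04                             # an actual value about 2% above the steady state
x_t = np.log(X_t) - np.log(X_bar)      # log-deviation, roughly 0.0198

# (15): X_t = X_bar * exp(x_t) holds exactly, by the definition of x_t.
print(X_bar * np.exp(x_t), X_t)

# (16): exp(x_t + a*y_t) is close to 1 + x_t + a*y_t for small deviations.
a, y_t = 0.5, 0.01
print(np.exp(x_t + a * y_t), 1 + x_t + a * y_t)

# (17): products of log-deviations are second-order small and treated as zero.
print(x_t * y_t)
```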



Page 30: DSGE Solving

The Ramsey Problem: The Model

Ramsey (1928) posed a problem of optimal resource allocation which is now often used as a prototype model of dynamic optimization.

Let $C_t$ denote consumption, $Y_t$ output and $K_t$ the capital stock.

Assume that output is produced with the capital stock and is either consumed or invested, that is, added to the capital stock. Formally,

$$Y_t = A_t K_t^{\alpha} \qquad (18)$$

$$Y_t = C_t + K_{t+1} - K_t \qquad (19)$$


Page 31: DSGE Solving

where $\alpha \in (0, 1)$ and $A_t$ is the technology, which may follow an AR(1) process:

$$A_{t+1} = a_0 + a_1 A_t + \varepsilon_{t+1} \qquad (20)$$

Assume $\varepsilon_t$ to be i.i.d. Equations (18) and (19) indicate that we can write the transition law of the capital stock as

$$K_{t+1} = A_t K_t^{\alpha} - C_t \qquad (21)$$

Note that the depreciation rate of the capital stock has been assumed to equal 1.

This is a simplifying assumption under which the exact solution is computable.


Page 32: DSGE Solving

The representative agent is assumed to find the control sequence $\{C_t\}_{t=0}^{\infty}$ that solves

$$\max E_0\left[\sum_{t=0}^{\infty}\beta^t \ln C_t\right] \qquad (22)$$

given the initial condition $(K_0, A_0)$.


Page 33: DSGE Solving

The Ramsey Problem: The Euler Equation

The first task is to transform the model into a setting in which the state variable $K_t$ does not appear in $F(\cdot)$. This can be done by taking $K_{t+1}$ (instead of $C_t$) as the model's decision variable.

To achieve notational consistency in the time subscript, one may denote the decision variable as $Z_t$. The model can therefore be rewritten as

$$\max E_0\left[\sum_{t=0}^{\infty}\beta^t \ln\left(A_t K_t^{\alpha} - K_{t+1}\right)\right] \qquad (23)$$

subject to $K_{t+1} = Z_t$.


Page 34: DSGE Solving

Note that here one has used (21) to express $C_t$ in the utility function.

Also note that in this formulation the state variable in period $t$ is still $K_t$.

Therefore $\partial F/\partial x = 0$ and $\partial F/\partial u = 1$. The Bellman equation in this case can be written as

$$V(K_t, A_t) = \max_{K_{t+1}}\left\{\ln(A_t K_t^{\alpha} - K_{t+1}) + \beta E_t\left[V(K_{t+1}, A_{t+1})\right]\right\} \qquad (24)$$

The necessary condition for maximizing the right-hand side of the Bellman equation (24) is given by

$$\frac{-1}{A_t K_t^{\alpha} - K_{t+1}} + \beta E_t\left[\frac{\partial V(K_{t+1}, A_{t+1})}{\partial K_{t+1}}\right] = 0 \qquad (25)$$


Page 35: DSGE Solving

Meanwhile, applying the Benveniste-Scheinkman formula,

$$\frac{\partial V(K_t, A_t)}{\partial K_t} = \frac{\alpha A_t K_t^{\alpha - 1}}{A_t K_t^{\alpha} - K_{t+1}} \qquad (26)$$

Substituting eq. (26) into (25) allows us to obtain the Euler equation:

$$\frac{-1}{A_t K_t^{\alpha} - K_{t+1}} + \beta E_t\left[\frac{\alpha A_{t+1} K_{t+1}^{\alpha - 1}}{A_{t+1} K_{t+1}^{\alpha} - K_{t+2}}\right] = 0$$

which can further be written as

$$-\frac{1}{C_t} + \beta E_t\left[\frac{\alpha A_{t+1} K_{t+1}^{\alpha - 1}}{A_{t+1} K_{t+1}^{\alpha} - K_{t+2}}\right] = 0 \qquad (27)$$

The Euler equation (27), along with (21) and (20), determines the transition sequences $\{K_{t+1}\}_{t=1}^{\infty}$, $\{A_{t+1}\}_{t=1}^{\infty}$ and $\{C_t\}_{t=0}^{\infty}$, given the initial conditions $K_0$ and $A_0$.


Page 36: DSGE Solving

The First-Order Condition Derived from the Lagrangian

Next, derive the first-order condition from the Lagrangian. Define the Lagrangian:

$$L = E_0\left\{\sum_{t=0}^{\infty}\beta^t \ln C_t - \sum_{t=0}^{\infty} E_t\left[\beta^{t+1}\lambda_{t+1}\left(K_{t+1} - A_t K_t^{\alpha} + C_t\right)\right]\right\}$$

Setting to zero the derivatives of $L$ with respect to $\lambda_t$, $C_t$ and $K_t$, one obtains (21) as well as

$$1/C_t - \beta E_t\lambda_{t+1} = 0 \qquad (28)$$

$$\beta E_t\left[\lambda_{t+1}\alpha A_t K_t^{\alpha - 1}\right] = \lambda_t \qquad (29)$$

These are the first-order conditions derived from the Lagrangian.

Substituting out for $\lambda$ (the one-period-ahead versions of (28) and (29) give $\lambda_{t+1} = \alpha A_{t+1} K_{t+1}^{\alpha-1}/C_{t+1}$, which can be inserted into (28)), one obtains the Euler equation (27).


Page 37: DSGE Solving

The Ramsey Problem: The Exact Solution and the Steady States

The exact solution of this model, which can be derived with the methods described above, can be written as

$$K_{t+1} = \alpha\beta A_t K_t^{\alpha} \qquad (30)$$

This further implies, from (21), that

$$C_t = (1 - \alpha\beta) A_t K_t^{\alpha} \qquad (31)$$

Given the solution paths for $C_t$ and $K_{t+1}$, one is then able to derive the steady state.

One steady state is on the boundary, that is, $\bar{K} = 0$ and $\bar{C} = 0$.
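As a sanity check, one can verify numerically that the policy (30) and the implied consumption (31) satisfy the Euler equation (27) in the certainty-equivalence case ($A_t$ held fixed); the parameter values below are illustrative only:

```python
alpha, beta, A = 0.3, 0.95, 1.0        # illustrative parameters
K0 = 0.10                              # an arbitrary current capital stock

K1 = alpha * beta * A * K0 ** alpha          # eq. (30): next-period capital
C0 = (1 - alpha * beta) * A * K0 ** alpha    # eq. (31): current consumption
C1 = (1 - alpha * beta) * A * K1 ** alpha    # eq. (31): next-period consumption

# Euler equation (27) without uncertainty: -1/C_t + beta*alpha*A*K_{t+1}^(alpha-1)/C_{t+1} = 0
euler_residual = -1.0 / C0 + beta * alpha * A * K1 ** (alpha - 1) / C1
print(euler_residual)   # ~ 0 up to floating-point error
```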


Page 38: DSGE Solving

To obtain a more meaningful interior steady state, we take logarithms of both sides of (30) and evaluate the equation in its certainty-equivalence form:

$$\log K_{t+1} = \log(\alpha\beta A_t) + \alpha\log K_t \qquad (32)$$

At the steady state, $K_{t+1} = K_t = \bar{K}$. Solving (32) for $\log\bar{K}$, we obtain $\log\bar{K} = \frac{\log(\alpha\beta\bar{A})}{1 - \alpha}$.

Therefore,

$$\bar{K} = (\alpha\beta\bar{A})^{1/(1-\alpha)} \qquad (33)$$

Given $\bar{K}$, $\bar{C}$ is obtained from (21):

$$\bar{C} = \bar{A}\bar{K}^{\alpha} - \bar{K} \qquad (34)$$
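For illustration, the interior steady state (33)-(34) can be evaluated for a given (placeholder) parameterization:

```python
alpha, beta, A_bar = 0.3, 0.95, 1.0                    # illustrative parameters

K_bar = (alpha * beta * A_bar) ** (1 / (1 - alpha))    # eq. (33)
C_bar = A_bar * K_bar ** alpha - K_bar                 # eq. (34)
print(K_bar, C_bar)    # roughly 0.166 and 0.418 for these values
```

With the same parameter values, this is also (approximately) the fixed point of the policy obtained from the value-function-iteration sketch earlier in these notes.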
