Lecture 15: Value Function Iteration

Macro II

Professor Griffy

UAlbany

Spring 2024
Announcements

- Today: continue solution methods: value function iteration.
- Using:
  1. Grid search;
  2. Interpolation (grid search with functions filling in between nodes).
- Go through examples with the neoclassical growth model.
- Homework assignment: do the same with the RBC model (HW5).
- Midterm grades by end of the week (my bad).
Solving a Model

- When we say "solve a model," what do we mean?
  1. Find the equilibrium of the model.
  2. Generally, determine the policy functions.
  3. Determine the transition equations given the individual and aggregate state.
  4. i.e., aggregate up the policy functions and determine prices given distributions.
- Generically, this is hard: many states, non-linear decision rules, etc.
Solving a Model
- Generically, this is hard: many states, non-linear decision rules, etc.
- Much of quantitative macro is about finding "shortcuts" without sacrificing accuracy of the solution (some we have seen):
  1. Planner's problem: use the welfare theorems to remove prices from the problem.
  2. Rational expectations & complete markets: aggregate worker decision rules by assuming workers make the same predictions about future prices and face the same consumption risk.
  3. Exogenous wage distribution/prices: agents do not respond to the decisions of other agents.
  4. Block recursive equilibrium: agents face an equilibrium with individual prices, i.e., no need to know the distribution.
- Linearization: assume the economy is close enough to the steady state that transition equations (i.e., policy functions) are close to linear within small deviations.
- Value function iteration: discretize the state space and solve the model at "nodes" in the state space.
Neoclassical Growth Model

- Problem:

  V(k) = \max_{k'} u(c) + \beta V(k')   (1)

  c + k' = F(k) + (1 - \delta) k   (2)

- Assume power utility: u(c) = \frac{c^{1-\sigma}}{1-\sigma}
- Cobb-Douglas production: F(k) = k^\alpha
Value Function Iteration

- Basic approach to value function iteration (a schematic sketch follows this list):
  1. Create a grid of points for each dimension of the state space.
  2. Specify a terminal condition V_t for t = T at each point in the state space.
  3. Solve the problem of the agent in period T - 1: V_t(y) = \max_x u(c(x)) + \beta E[V_{t+1}(x)].
  4. x is the policy function: the choice that yields the largest value from \{x_1, ..., x_N\}, where N is the number of grid points.
  5. Check to see if the function has converged, i.e., |V_t - V_{t+1}| < errtol.
  6. Update V_{t+1} = V_t.
- Interpolation: same idea, but functions are used to fill in between grid points.
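A minimal sketch of these six steps in Python. This is illustrative, not the course's code: the grid, the payoff function, and the default values are placeholders for whatever model is being solved.

```python
import numpy as np

def vfi(grid, payoff, beta=0.96, errtol=1e-6, max_iter=1000):
    """Iterate V_t(y) = max_x payoff(y, x) + beta * V_{t+1}(x) on a grid.

    payoff(y, x) should return the current utility of choosing grid point x
    in state y (e.g., u(c(x))); it should return -inf for infeasible choices.
    """
    N = len(grid)
    V_next = np.zeros(N)                  # step 2: terminal / initial guess V_T = 0
    policy = np.zeros(N, dtype=int)
    for _ in range(max_iter):
        V = np.empty(N)
        for i, y in enumerate(grid):      # step 3: solve the one-period problem at each node
            values = np.array([payoff(y, x) for x in grid]) + beta * V_next
            policy[i] = np.argmax(values) # step 4: best choice among the grid points
            V[i] = values[policy[i]]
        if np.max(np.abs(V - V_next)) < errtol:   # step 5: convergence check
            return V, policy
        V_next = V                        # step 6: update the continuation value
    return V, policy
```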
Parameter Values

- Before we can solve the model (or write down grids) we need parameter values.
- Pick reasonable ones from the literature:
  - \alpha = 0.3 (roughly the capital share)
  - \sigma = 2 (standard risk aversion)
  - \delta = 0.1 (annual depreciation of 10%)
  - \beta = 0.96 (annual interest rate \approx 4.2%)
- If we were estimating this model, we would evaluate the performance of the model given these parameters.
- i.e., how does it fit the data if we use this set of parameters?
Grids
- Want: the smallest grids that are reasonable.
- Find k^*, pick grids around this.
- Euler equation:

  u'(c) = \beta [\alpha k^{\alpha - 1} + (1 - \delta)] u'(c')   (3)

- In steady state, c = c' = c^*:

  u'(c^*) = \beta [\alpha (k^*)^{\alpha - 1} + (1 - \delta)] u'(c^*)   (4)

  1 = \beta [\alpha (k^*)^{\alpha - 1} + (1 - \delta)]   (5)

  k^* = \left( \frac{1/\beta - (1 - \delta)}{\alpha} \right)^{\frac{1}{\alpha - 1}}   (6)

- For our parameter values, k^* = 2.92.
- Pick grids s.t. k, k' \in [0.66 \times k^*, 1.5 \times k^*].
- Arbitrary, probably larger than needed (a quick check follows).
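A quick check of equation (6) and the grid bounds in Python; the grid size N is my own illustrative choice.

```python
import numpy as np

alpha, delta, beta = 0.3, 0.1, 0.96               # parameter values from the previous slide

# Steady-state capital from equation (6)
k_star = ((1 / beta - (1 - delta)) / alpha) ** (1 / (alpha - 1))
print(k_star)                                     # approximately 2.92

N = 200                                           # number of grid points (assumption)
k_grid = np.linspace(0.66 * k_star, 1.5 * k_star, N)
```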
Neoclassical Growth Model

- Problem:

  V(k) = \max_{k'} u(c) + \beta V(k')   (8)

  c + k' = F(k) + (1 - \delta) k   (9)

- Assume power utility: u(c) = \frac{c^{1-\sigma}}{1-\sigma}
- Cobb-Douglas production: F(k) = k^\alpha
- k, k' \in \{k_1, ..., k_N\}
- V_0 = ? Safest bet is to set it to zero at all k.
Value Function First Iteration
- Intuitively, take as given capital today (\bar{k}), choose capital in the future that maximizes value.
- Problem:

  V(\bar{k}) = \max_{k' \in \{k_1, ..., k_N\}} u(c) + \beta V(k')   (10)

  c + k' = F(\bar{k}) + (1 - \delta) \bar{k}   (11)

- That is, the policy function is k_i, where i is the index of the optimal policy from the following:

  u(F(\bar{k}) + (1 - \delta) \bar{k} - k_1) + \beta \times 0   (12)

  u(F(\bar{k}) + (1 - \delta) \bar{k} - k_2) + \beta \times 0   (13)

  ...   (14)

  u(F(\bar{k}) + (1 - \delta) \bar{k} - k_N) + \beta \times 0   (15)
Value Function First Iteration
- Value of V_{t+1}(k') given k = \bar{k} (x-axis is the grid-point index):
- What is the optimal choice?


Value Function First Iteration

- Now, check whether the problem has converged.
- What does this mean?
- The value in the current state is not changing over time.
- i.e., V_t(k) \approx V_{t+1}(k).
- First iteration: it won't be.
- What do we do now?
- Update the continuation value: V_{t+1} = V_t for all k.
- Solve the same problem again.
Value Function Second Iteration

- Solved for V(k') in the previous iteration.
- Again, faced with a maximization problem given capital \bar{k} today:

  V(\bar{k}) = \max_{k' \in \{k_1, ..., k_N\}} u(c) + \beta V(k')   (16)

  c + k' = F(\bar{k}) + (1 - \delta) \bar{k}   (17)

- Note that the continuation value is not zero!

  u(F(\bar{k}) + (1 - \delta) \bar{k} - k_1) + \beta V(k_1)   (18)

  u(F(\bar{k}) + (1 - \delta) \bar{k} - k_2) + \beta V(k_2)   (19)

  ...   (20)

  u(F(\bar{k}) + (1 - \delta) \bar{k} - k_N) + \beta V(k_N)   (21)
Value Function Second Iteration
- Value of V_{t+1}(k') given k = \bar{k} (x-axis is the grid-point index):
- What is the optimal choice?


Value Function Second Iteration

- We check again to see if it has converged.
- Is V_t(k) \approx V_{t+1}(k)?
- What do we do now?
- Update the continuation value: V_{t+1} = V_t for all k.
- Solve the same problem again.
- Keep doing this until the difference is very small (a full worked sketch follows).
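Putting the pieces together, a minimal grid-search version of this loop for the neoclassical growth model, assuming the parameter values and grid bounds from the earlier slides; the grid size, tolerance, and iteration cap are my own choices.

```python
import numpy as np

alpha, sigma, delta, beta = 0.3, 2.0, 0.1, 0.96
errtol, N = 1e-6, 200

k_star = ((1 / beta - (1 - delta)) / alpha) ** (1 / (alpha - 1))
k_grid = np.linspace(0.66 * k_star, 1.5 * k_star, N)

def u(c):
    return c ** (1 - sigma) / (1 - sigma)

# Consumption for every (k today, k' tomorrow) pair; infeasible pairs get utility -inf.
C = k_grid[:, None] ** alpha + (1 - delta) * k_grid[:, None] - k_grid[None, :]
U = np.where(C > 0, u(np.maximum(C, 1e-12)), -np.inf)

V = np.zeros(N)                            # V_0 = 0 at every grid point
for _ in range(10_000):                    # convergence arrives long before this cap
    values = U + beta * V[None, :]         # value of each candidate k' at each k
    policy = np.argmax(values, axis=1)     # index of the best k' for each k
    V_new = values[np.arange(N), policy]
    if np.max(np.abs(V_new - V)) < errtol:
        V = V_new
        break
    V = V_new

k_policy = k_grid[policy]                  # the policy function k'(k) on the grid
```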
Great, we’re done!

- Not so fast: this isn't very accurate.
- Very slow if we have large numbers of states & grid points (scales exponentially).
Fundamental Problem

- The reason we need a computer to solve this problem is that we don't know the function V(k).

  V(k) = \max_{k'} u(c) + \beta V(k')   (22)

  c + k' = F(k) + (1 - \delta) k   (23)

- What if we approximate V(k) with other functions?
- Some useful properties we can pick these functions to have:
  - Continuous
  - Differentiable
- If our approximation is accurate enough, we can drop some grid points!
Interpolation
- Again, take capital today as given, k = \bar{k}. Grid search:

  V(\bar{k}) = \max_{k' \in \{k_1, ..., k_N\}} u(c) + \beta V(k')   (24)

  c + k' = F(\bar{k}) + (1 - \delta) \bar{k}   (25)

- The optimal policy is the index of the largest of:

  u(F(\bar{k}) + (1 - \delta) \bar{k} - k_1) + \beta V(k_1)   (26)

  ...   (27)

  u(F(\bar{k}) + (1 - \delta) \bar{k} - k_N) + \beta V(k_N)   (28)

- Call the interpolated function \hat{V}(k). Then,

  V(\bar{k}) = \max_{k'} u(c) + \beta \hat{V}(k')   (29)

  c + k' = F(\bar{k}) + (1 - \delta) \bar{k}   (30)

- Where k' solves (a numerical sketch follows):

  u'(F(\bar{k}) + (1 - \delta) \bar{k} - k') = \beta \frac{\partial \hat{V}}{\partial k'}   (31)
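A sketch of one update step with a linearly interpolated \hat{V} ("connect the dots", as described on the next slide), maximizing numerically over a continuous k' with SciPy rather than solving the first-order condition (31) in closed form; the function names here are illustrative, not part of the lecture code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

alpha, sigma, delta, beta = 0.3, 2.0, 0.1, 0.96

def u(c):
    return c ** (1 - sigma) / (1 - sigma)

def bellman_update(V, k_grid):
    """One iteration: for each k on the grid, maximize u(c) + beta * V_hat(k')."""
    V_new = np.empty_like(V)
    k_pol = np.empty_like(V)
    for i, k_bar in enumerate(k_grid):
        resources = k_bar ** alpha + (1 - delta) * k_bar

        def objective(k_prime):
            c = resources - k_prime
            V_hat = np.interp(k_prime, k_grid, V)     # linear interpolation between nodes
            return -(u(c) + beta * V_hat)             # minimize the negative value

        upper = min(resources - 1e-8, k_grid[-1])     # keep consumption strictly positive
        res = minimize_scalar(objective, bounds=(k_grid[0], upper), method="bounded")
        k_pol[i] = res.x
        V_new[i] = -res.fun
    return V_new, k_pol
```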
Updating

- We do exactly the same thing as before:

  V(\bar{k}) = u(c(k'^*)) + \beta \hat{V}(k'^*)   (32)

- For each \bar{k}. Then, we check the convergence criterion:

  |V_t - V_{t+1}| < errtol   (33)

- How do we create the function \hat{V}(k)?
- "Connect the dots" of V_t(k) between all capital levels in order.
Interpolation

- Left: the value function from grid search. Right: the (linearly) interpolated function:
Interpolation

- In constructing our function \hat{V}(k), we need to choose an interpolation scheme.
- Roughly, what order of function do we believe will be accurate enough to mimic the value function:
  - First-order (linear)
  - Third-order (cubic)
  - Fifth-order (quintic)
- Some other useful interpolation routines (a short comparison follows this list):
  - PCHIP (piecewise cubic Hermite interpolating polynomial): a shape-preserving (not "wiggly") piecewise cubic spline with a continuous first derivative.
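For concreteness, a small comparison of a standard cubic spline and PCHIP in SciPy on illustrative concave data (not the model's value function):

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.log(1.0 + x)                       # concave, "value-function-like" sample points

cubic = CubicSpline(x, y)                 # third-order spline
pchip = PchipInterpolator(x, y)           # shape-preserving piecewise cubic

xx = np.linspace(0.0, 5.0, 101)
print(np.max(np.abs(cubic(xx) - pchip(xx))))   # the two schemes disagree between the nodes
```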
Interpolation

- Choice DOES matter:
Polynomial Interpolation

- Suppose we have a function y = f(x) for which we know the values of y at \{x_1, ..., x_n\}.
- Then, the nth-order polynomial approximation to this function f is given by

  f(x) \approx P_n(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0   (34)

- Then, we have a linear system in the coefficients a_0, ..., a_n.
- We could write this as y = Xa. Look familiar?
Polynomial Interpolation

- We solve

  \begin{bmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{bmatrix} \begin{bmatrix} a_0 \\ \vdots \\ a_n \end{bmatrix} = \begin{bmatrix} y_0 \\ \vdots \\ y_n \end{bmatrix}   (35)

- for a_0, ..., a_n.
- What's the example we are all familiar with? Linear regression: y = \alpha + X\beta.
- In practice, this is computationally expensive, but this is the intuition (see the example below).
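A short illustration of solving the system in (35) with NumPy; the nodes and the function being interpolated are illustrative.

```python
import numpy as np

x = np.linspace(0.0, 2.0, 5)              # nodes x_0, ..., x_n
y = np.exp(x)                             # known values y_0, ..., y_n

X = np.vander(x, increasing=True)         # rows are [1, x_i, x_i^2, ..., x_i^n]
a = np.linalg.solve(X, y)                 # coefficients a_0, ..., a_n

print(np.polyval(a[::-1], 1.3))           # interpolated value at a new point
print(np.exp(1.3))                        # true value, for comparison
```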
Great, we’re done!

- Not so fast: how do we handle expected values?
- Depends on the expectation.
- Need an accurate way to perform numerical integration.
Stochastic Neoclassical Growth Model

- Problem:

  V(z, k) = \max_{k'} u(c) + \beta E[V(z', k')]   (36)

  c + k' = e^z F(k) + (1 - \delta) k   (37)

  z' = \rho z + \epsilon'   (38)

  \epsilon \sim N(0, \sigma_\epsilon^2)   (39)

- Assume power utility: u(c) = \frac{c^{1-\sigma}}{1-\sigma}
- Cobb-Douglas production: F(k) = k^\alpha
- Make sure your process for z stays non-negative.
Expectations with AR(1) Process
- Approximate a continuous AR(1) process with a Markov process:
- Create a grid of potential z values \{z_1, ..., z_N\}; approximate the AR(1) process through transition probabilities.

  E[z_t] = E[\rho z_{t-1} + \epsilon_t] = 0   (40)

  V[z_t] = V[\rho z_{t-1} + \epsilon_t] = \rho^2 \sigma_z^2 + \sigma_\epsilon^2   (41)

  \Rightarrow (1 - \rho^2) \sigma_z^2 = \sigma_\epsilon^2   (42)

- Define this process G(\bar{\epsilon}).
- Tauchen (1986):

  z_N = m \left( \frac{\sigma_\epsilon^2}{1 - \rho^2} \right)^{1/2}   (43)

  z_1 = -z_N   (44)

  z_2, ..., z_{N-1} equidistant   (45)
Expectations with AR(1) Process

- Tauchen (1986):

  z_N = m \left( \frac{\sigma_\epsilon^2}{1 - \rho^2} \right)^{1/2}   (46)

  z_1 = -z_N   (47)

  z_2, ..., z_{N-1} equidistant   (48)

- Create an N \times N transition matrix \Pi using probabilities (d is the spacing between grid points; a Python sketch follows):

  \pi_{ij} = G(z_j + d/2 - \rho z_i) - G(z_j - d/2 - \rho z_i)   (49)

  \pi_{i1} = G(z_1 + d/2 - \rho z_i)   (50)

  \pi_{iN} = 1 - G(z_N - d/2 - \rho z_i)   (51)
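A sketch of the Tauchen (1986) recipe above in Python, using the standard normal CDF for G; the values of N, \rho, \sigma_\epsilon, and m are illustrative.

```python
import numpy as np
from scipy.stats import norm

def tauchen(N=7, rho=0.9, sigma_eps=0.02, m=3.0):
    """Return a grid for z and the N x N transition matrix Pi."""
    sigma_z = sigma_eps / np.sqrt(1.0 - rho ** 2)
    z = np.linspace(-m * sigma_z, m * sigma_z, N)   # z_1 = -z_N, equidistant interior points
    d = z[1] - z[0]                                 # spacing between grid points
    Pi = np.empty((N, N))
    for i in range(N):
        mean = rho * z[i]
        Pi[i, 0] = norm.cdf((z[0] + d / 2 - mean) / sigma_eps)
        Pi[i, -1] = 1.0 - norm.cdf((z[-1] - d / 2 - mean) / sigma_eps)
        for j in range(1, N - 1):
            Pi[i, j] = (norm.cdf((z[j] + d / 2 - mean) / sigma_eps)
                        - norm.cdf((z[j] - d / 2 - mean) / sigma_eps))
    return z, Pi

z_grid, Pi = tauchen()
print(Pi.sum(axis=1))                               # each row sums to one
```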
Expectations Generally

- Expected values also need to be calculated carefully.
- Continuation value from before:

  E[V(z', k')]   (52)

- If not an AR(1)/Markov process, we need to approximate the integral.
- Generically, pick a function f and weights w_i (an example follows):

  E[V(z', k')] = \int_a^b f(x) dx \approx \sum_{i=1}^{N} w_i f(x_i)   (53)

- x_i may be known or picked optimally.
- We will return to this in the future.
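As one concrete example of (53), Gauss-Hermite quadrature for an expectation over a normal shock; the function f, \sigma, and the node count are illustrative choices.

```python
import numpy as np

def expect_normal(f, sigma=0.02, n_nodes=9):
    """Approximate E[f(eps)] for eps ~ N(0, sigma^2) with Gauss-Hermite nodes/weights."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    eps = np.sqrt(2.0) * sigma * nodes        # change of variables for the N(0, sigma^2) density
    return np.sum(weights * f(eps)) / np.sqrt(np.pi)

print(expect_normal(lambda e: e ** 2))        # should be close to sigma^2 = 4e-4
```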
Next Time

- Calibration and RBC extensions.
