Economics 2010c: Lecture 1 Introduction To Dynamic Programming
9/02/2014
Outline of my half-semester course:
Remark 1.1 When I omit time subscripts, this implies that an equation holds for all relevant values of $t$. In the statement above, $x_{t+1} \in \Gamma(x_t)$ implies $x_{t+1} \in \Gamma(x_t)$ for all $t = 0, 1, 2, \dots$
Example 1.1 Optimal growth with log utility and Cobb-Douglas technology:
$$\sup_{\{c_t\}_{t=0}^{\infty}} \; \sum_{t=0}^{\infty} \delta^t \ln(c_t)$$
subject to the constraints $c_t \geq 0$, $c_t + k_{t+1} = k_t^{\alpha}$, and $k_0$ given.
Translate this problem into Sequence Problem notation by (1) eliminating redundant variables and (2) introducing the constraint correspondence $\Gamma$, as worked out below.
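A minimal worked version of that translation (using the symbols from the example; consumption is eliminated via $c_t = k_t^{\alpha} - k_{t+1}$):
$$v(k_0) = \sup_{\{k_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \delta^t \ln\!\left(k_t^{\alpha} - k_{t+1}\right), \qquad k_{t+1} \in \Gamma(k_t) = [0,\, k_t^{\alpha}], \quad k_0 \text{ given},$$
so the period payoff is $F(k, k') = \ln(k^{\alpha} - k')$ and the constraint correspondence is $\Gamma(k) = [0, k^{\alpha}]$.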
• Note that any old function won’t solve the Bellman Equation.
• We haven’t yet demonstrated that there exists even one function $v(\cdot)$ that will satisfy the Bellman equation.
• We will show that the (unique) value function defined by the Sequence
Problem is also the unique solution to the Bellman Equation.
A solution to the Sequence Problem is also a solution to the Bellman Equation.
$$
\begin{aligned}
v(x_0) &= \sup_{\{x_{t+1} \in \Gamma(x_t)\}_{t=0}^{\infty}} \; \sum_{t=0}^{\infty} \delta^t F(x_t, x_{t+1}) \\
&= \sup_{\{x_{t+1} \in \Gamma(x_t)\}_{t=0}^{\infty}} \left\{ F(x_0, x_1) + \sum_{t=1}^{\infty} \delta^t F(x_t, x_{t+1}) \right\} \\
&= \sup_{\{x_{t+1} \in \Gamma(x_t)\}_{t=0}^{\infty}} \left\{ F(x_0, x_1) + \delta \sum_{t=1}^{\infty} \delta^{t-1} F(x_t, x_{t+1}) \right\} \\
&= \sup_{x_1 \in \Gamma(x_0)} \left\{ F(x_0, x_1) + \delta \sup_{\{x_{t+2} \in \Gamma(x_{t+1})\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \delta^t F(x_{t+1}, x_{t+2}) \right\}
\end{aligned}
$$
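The inner supremum is just the continuation value $v(x_1)$, so the value function defined by the Sequence Problem satisfies the Bellman Equation:
$$v(x_0) = \sup_{x_1 \in \Gamma(x_0)} \left\{ F(x_0, x_1) + \delta\, v(x_1) \right\}.$$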
• Three methods
• Method 1 today.
• Guess a function $v(x)$, and then check to see that this function satisfies the Bellman Equation at all possible values of $x$.
• For our growth example, guess that the solution of the growth problem takes the form
$$v(k) = \phi + \psi \ln(k)$$
where $\phi$ and $\psi$ are constants for which we need to find solutions.
• We then confirm that $v(k) = \phi + \psi \ln(k)$ is a solution to the Bellman Equation (see the verification sketched below).
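One way to carry out that check is the standard matching-coefficients argument (sketched here for this example): substitute the guess into the Bellman Equation,
$$\phi + \psi \ln k = \sup_{k' \in [0,\, k^{\alpha}]} \left\{ \ln(k^{\alpha} - k') + \delta\left(\phi + \psi \ln k'\right) \right\}.$$
The first-order condition $\frac{1}{k^{\alpha} - k'} = \frac{\delta \psi}{k'}$ gives the policy $k' = \frac{\delta \psi}{1 + \delta \psi}\, k^{\alpha}$. Substituting this policy back in and matching the coefficients on $\ln k$ yields $\psi = \alpha + \alpha \delta \psi$, i.e. $\psi = \frac{\alpha}{1 - \alpha\delta}$, so the policy reduces to $k' = \alpha\delta\, k^{\alpha}$; the constant $\phi$ is then pinned down by the remaining constant terms.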
4 Search and optimal stopping
[Figure: the optimal acceptance threshold (vertical axis) plotted against the discount rate $\rho = -\ln(\delta)$ (horizontal axis). The threshold converges to 1 as the discount rate goes to 0 and converges to 0 as the discount rate goes to $\infty$.]
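The shape of the figure can be reproduced with a short numerical sketch. The model details below (offers drawn uniformly on $[0,1]$; accepting an offer ends the search) are assumptions chosen to illustrate the stopping problem, not specifics taken from the excerpt above.

# Hedged sketch: optimal stopping threshold as a function of the discount rate.
# ASSUMED model: each period an offer x ~ Uniform[0, 1] arrives; accept and
# receive x (search ends), or reject and wait one period, discounting by
# delta = exp(-rho).  The threshold x_bar solves x_bar = delta * E[max(x, x_bar)].
import math

def threshold(delta, tol=1e-12):
    """Fixed-point iteration on x_bar = delta * (1 + x_bar**2) / 2,
    using E[max(x, x_bar)] = (1 + x_bar**2) / 2 for x ~ Uniform[0, 1]."""
    x_bar = 0.0
    while True:
        x_new = delta * (1.0 + x_bar ** 2) / 2.0
        if abs(x_new - x_bar) < tol:
            return x_new
        x_bar = x_new

if __name__ == "__main__":
    for rho in (0.01, 0.1, 0.3, 0.5, 1.0):  # discount rate rho = -ln(delta)
        delta = math.exp(-rho)
        x_fp = threshold(delta)
        x_closed = (1.0 - math.sqrt(1.0 - delta ** 2)) / delta  # closed-form root
        print(f"rho={rho:4.2f}  threshold={x_fp:.4f}  closed-form={x_closed:.4f}")
    # The threshold rises toward 1 as rho -> 0 and falls toward 0 as rho grows,
    # matching the behavior described in the figure.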
Outline of today’s lecture: