Math II at SU and SSE
John Hassler
IIES, Stockholm University
February 25, 2005
1 Introduction [1]

This course is about dynamic systems, i.e., systems that evolve over time. The analysis of the dynamic evolution of economic systems is of core importance in many areas of economics. Growth, business cycles, asset pricing, and dynamic game theory are just a few examples.

[1] I am grateful for comments and corrections provided by Dirk Niepelt.
1.1 Solving a simple dynamic system

Very often, our economic models provide a difference or differential equation for the endogenous variables. Take a very simple example: an arbitrage asset pricing model. Suppose there is a safe asset, a bond, that provides a return $r$ every period. Suppose a share, giving rights to a dividend flow of $d$ per period, is introduced. Now, arbitrage theory says that the share should also yield the return $r$ in equilibrium. Defining the price of the share as $p_t$, arbitrage theory thus implies

$$\frac{p_{t+1}+d}{p_t}=1+r.\qquad(1)$$

This is a simple difference equation,

$$p_{t+1}=(1+r)\,p_t-d.\qquad(2)$$

Note that $p_t$ is an endogenous variable in this system, while $r$ and $d$ are exogenous. This would be the case also if $d$ and $r$ varied over time, cases we will study below.

One straightforward way of solving it is to substitute forward or backward, e.g., noting that

$$p_t=(1+r)\,p_{t-1}-d\qquad(3)$$
$$=(1+r)\left((1+r)\,p_{t-2}-d\right)-d\qquad(4)$$
$$=(1+r)^2\,p_{t-2}-d\left(1+(1+r)\right)\qquad(5)$$

and so on. A more general approach is to first characterize all possible paths consistent with the law of motion. Here, this is quite simple. You will learn that the set of possible paths is

$$p_t=c\,(1+r)^t+\frac{d}{r}\qquad(6)$$
for any constant $c$. As we see, there is an infinite number of solutions, i.e., we need more information. If, for example, we know the value of $p_0$, we can solve for the constant:

$$c=p_0-\frac{d}{r}\qquad(7)$$
$$p_t=\left(p_0-\frac{d}{r}\right)(1+r)^t+\frac{d}{r}.\qquad(8)$$

In finance, the solution $p=d/r$ is called the fundamental solution, and we see that if $r>0$, the solution explodes as $t$ goes to infinity if $p_0\neq d/r$. This gives us another way of solving the difference equation. Suppose we have reason to believe that the solution should remain bounded. Then, if $r>0$, the only remaining solution is the one with $c=0$. Note that $(1+r)$ is called the root of the system. Note the importance of whether the root is bigger or smaller than unity (in absolute value).
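The distinction between the bounded fundamental solution and the exploding "bubble" paths is easy to see numerically. The following is a minimal Python sketch; the parameter values $r=0.05$ and $d=1$ are illustrative assumptions, not taken from the text:

```python
# Any path p_t = c(1+r)**t + d/r satisfies p_{t+1} = (1+r) p_t - d; only the
# fundamental solution (c = 0, i.e. p = d/r) stays bounded when r > 0.
r, d = 0.05, 1.0
fundamental = d / r  # = 20

def path(p0, T=100):
    p = p0
    for _ in range(T):
        p = (1 + r) * p - d  # law of motion (2)
    return p

print(path(fundamental))         # stays at 20.0
print(path(fundamental + 0.01))  # the bubble term c(1+r)**t grows without bound
```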
We will also work in continuous time. Then I usually put the time variable in parentheses and use the dot symbol to indicate time derivatives. In continuous time, non-existence of arbitrage means that the capital gain, i.e., the change in the price per unit of time, plus dividends per unit of time, should equal the opportunity cost, i.e., the interest rate times the price of the asset. Thus, non-existence of arbitrage implies

$$\dot p(t)+d=r\,p(t).\qquad(9)$$

This is a simple linear first-order differential equation. The set of solutions is

$$p(t)=c\,e^{rt}+\frac{d}{r}.\qquad(10)$$

Also here, we have a term that is explosive if $r>0$.

Later in the course, we will learn how to solve more complicated dynamic systems, involving, e.g., several endogenous variables and varying parameters.
1.2 Two approaches to Dynamic Optimization

The second part of the course is about solving maximization problems in dynamic systems. Suppose there is a potentially infinite set of paths $\{x(t)\}_0^T$, each denoting a particular continuous function $x(t)$ for $t\in[0,T]$. Suppose also that we can evaluate them, i.e., they give different payoffs. Then we will learn how to derive difference or differential equations that are necessarily satisfied by optimal paths. If we can solve these equations, we can find the optimal path. We will use two approaches in this course.
1.2.1 Dynamic Programming (Bellman)

Suppose we want to find an optimal investment plan in discrete time, and let $x_t$ denote the stock of capital at time $t$. Also, let $u_t$ denote investments and assume

$$x_{t+1}=g(x_t,u_t),\qquad(11)$$

which we call the law of motion for $x_t$.

Each period, the payoff is given by $F(t,x_t,u_t)$ and the problem is to solve

$$\max_{\{x_t\}_0^{T+1},\,\{u_t\}_0^T}\ \sum_{t=0}^{T}F(t,x_t,u_t)\qquad(12)$$
$$\text{s.t. } x_{t+1}=g(x_t,u_t)\ \forall t,\qquad(13)$$
$$x_0=\bar x.\qquad(14)$$
Note that this is a dynamic problem; the choice of investment at time $t$, $u_t$, may affect payoffs in many future periods. First, the payoff at $t$, $F(t,x_t,u_t)$, is affected directly, and so is the next period's payoff, since

$$F(t+1,x_{t+1},u_{t+1})=F(t+1,g(x_t,u_t),u_{t+1}).$$

Furthermore, payoffs further away can also be affected since, for example, $x_{t+2}=g(x_{t+1},u_{t+1})=g(g(x_t,u_t),u_{t+1})$, affecting the payoff in period $t+2$. The choice of $u_t$ must thus take into account all future payoff-relevant effects.
Sometimes the dynamic problem degenerates in the sense that this dynamic link breaks. To illustrate this, let us simplify and take the example where $F(t,x_t,u_t)=\left(\frac{1}{1+r}\right)^t\left(f_t(x_t)-u_t\right)$ and $g(x_t,u_t)=(1-\delta)x_t+u_t$. We can interpret $\delta$ as the rate of capital depreciation. If $\delta=1$, we have $x_{t+1}=u_t$, and by substituting from the law of motion, we can write the problem as

$$\max_{\{x_t\}_1^{T}}\ \sum_{t=0}^{T}\left(\frac{1}{1+r}\right)^t\left(f_t(x_t)-x_{t+1}\right),\qquad(15)$$
$$x_0=\bar x.\qquad(16)$$

The first-order condition for choosing $x_t$ for any $t>0$ is $f_t'(x_t)=1+r$, so to know how much to invest in period $t$, we only need to know the marginal productivity of capital next period, i.e., we maximize period by period. With $\delta<1$, we cannot do this. Then dynamic programming is a handy way to attack the problem. We will use Bellman's principle of optimality, saying that there exists a sequence of functions $V_t(x_t)$ such that

$$V_t(x_t)=\max_{u_t}\ F(t,x_t,u_t)+V_{t+1}(g(x_t,u_t))\qquad(17)$$
and where

$$V_t(x_t)=\max_{\{x_s\}_{t+1}^{T+1},\,\{u_s\}_t^T}\ \sum_{s=t}^{T}F(s,x_s,u_s)\qquad(18)$$
$$\text{s.t. } x_{s+1}=g(x_s,u_s),\qquad(19)$$
$$x_t\ \text{given}.\qquad(20)$$

$V_t(x_t)$ is called the value function and $x_t$ a state variable. We find that, through the Bellman principle, we have split up the problem into a sequence of one-period problems. We can then solve the problem more easily, since the first-order condition for maximizing the RHS of (17),

$$F_u(t,x_t,u_t)+V_{t+1}'(g(x_t,u_t))\,g_u(x_t,u_t)=0,\qquad(21)$$

implicitly yields a system of difference equations in $x_t$ and $u_t$ that we may be able to solve.
1.2.2 Optimal control (Pontryagin)

The other way of solving dynamic optimization problems that we will use is called Optimal Control. We will use it for continuous-time problems. Suppose we want to solve

$$\max_{\{x(t)\}_0^T,\,\{u(t)\}_0^T}\ \int_0^T F(t,x(t),u(t))\,dt\qquad(22)$$
$$\text{s.t. }\dot x(t)=g(x(t),u(t))\qquad(23)$$
$$x(0)=x_0\qquad(24)$$
$$x(T)=x_T.\qquad(25)$$

Then Pontryagin's maximum principle says that for each point in time, the optimal control, call it $u^*(t)$, maximizes the Hamiltonian associated with the problem.
2.2 Integrals

The area under a bounded function $f(t)$ over an interval $[a,b]$, denoted

$$F(a,b)=\int_a^b f(t)\,dt,\qquad(1)$$

can be approximated by

$$\sum_{i=1}^{n}f(t_i)\,\Delta t,\qquad(2)$$

i.e., by summing rectangles of base $\Delta t$ and height $f(t_i)$.

[Figure: the area $F(a,b)$ under $f(t)$ between $a$ and $b$, and its approximation by rectangles of base $\Delta t$ at the points $t_1,\dots,t_6$.]
If the function $f(t)$ is bounded and differentiable and we let the number of sub-intervals ($n$) increase, and therefore the size of each one decrease, the approximation (2) becomes perfect as $n\to\infty$. To see this, note that we can approximate the error in the approximation by the triangle

$$\frac{f(t_{i+1})-f(t_i)}{2}\,\Delta t.$$

Since $f(t_{i+1})\approx f(t_i)+f'(t_i)\,\Delta t$, by a first-order Taylor approximation, each triangle can be approximated by

$$\frac{f'(t_i)\,\Delta t}{2}\,\Delta t=\frac{f'(t_i)}{2}\,\Delta t^2.\qquad(3)$$

Furthermore, the sum of absolute errors satisfies

$$\sum_{i=1}^{n-1}\left|\frac{f'(t_i)}{2}\right|\Delta t^2\le\max_i\left|f'(t_i)\right|\frac{1}{2}\sum_{i=1}^{n-1}\Delta t^2=\qquad(4)$$
$$\max_i\left|f'(t_i)\right|\frac{1}{2}(n-1)\left(\frac{b-a}{n-1}\right)^2=\max_i\left|f'(t_i)\right|\frac{1}{2}\,\frac{(b-a)^2}{n-1}.\qquad(5)$$

Clearly, this goes to zero as $n$ goes to infinity. If the function $f(t)$ has discontinuities or is non-differentiable somewhere, we can do the summation for each interval where $f(t)$ is continuous.
We conclude that we should think of the integral as a sum of rectangles,
each with height given by the function we are integrating and base dt.
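To see the error bound (5) at work, here is a minimal Python sketch; the choice $f(t)=t^2$ on $[0,1]$ is an illustrative assumption, not from the text:

```python
# The Riemann-sum error shrinks like (b - a)**2 / n, as in (5).
def riemann(f, a, b, n):
    dt = (b - a) / n
    return sum(f(a + i * dt) * dt for i in range(n))  # left-endpoint rectangles

f = lambda t: t**2
exact = 1 / 3  # the primitive t**3/3 evaluated over [0, 1]
for n in (10, 100, 1000):
    print(n, abs(riemann(f, 0.0, 1.0, n) - exact))  # error roughly 1/(2n) here
```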
If we can handle integration over compact intervals, we can also define integrals over unbounded intervals by taking the limit value as the integration limits ($a$ and/or $b$) approach infinity. Of course, this limit does not always exist. If it does, we use the notation

$$\lim_{a\to-\infty}\int_a^b f(t)\,dt=\int_{-\infty}^b f(t)\,dt\qquad(6)$$
$$\lim_{b\to\infty}\int_a^b f(t)\,dt=\int_a^{\infty} f(t)\,dt.\qquad(7)$$
2.2.1 Fundamental theorem of calculus

An integral is a generalization of a sum, and a derivative is a generalization of a difference. The following theorem links these concepts. The first part of the Fundamental Theorem says that if

$$F(b)=\int_a^b f(t)\,dt\qquad(8)$$

then

$$F'(b)=\frac{d}{db}\int_a^b f(t)\,dt=f(b).\qquad(9)$$

(Convince yourself that this is reasonable by making a drawing.) A function $F(t)$ that has the property that $F'(t)=f(t)$ is called a primitive for $f$. Clearly, if $F(t)$ is a primitive for $f(t)$, then also $F(t)$ plus any constant is a primitive. Thus, the first part of the fundamental theorem provides a necessary, but not sufficient, condition for finding the area.

Now let us turn to the second part of the theorem, which provides a way of calculating the exact value of an integral like that in (1).

Result 2 Let $F$ be any primitive of $f$; then

$$\int_a^b f(t)\,dt=F(b)-F(a)\equiv\left[F(t)\right]_a^b.\qquad(10)$$
Sometimes it is easy to find the primitive, as in the cases $f(t)=t^a$, $1/(at+b)$ or $e^{at}$, in which case $F(t)$ is $t^{a+1}/(a+1)$, $\ln(at+b)/a$ or $e^{at}/a$, respectively. In other cases, like $\pi^{-0.5}e^{-x^2}$, the primitive cannot be expressed by using standard algebraic functions. This does not mean that the primitive does not exist. In the case $\pi^{-0.5}e^{-x^2}$, a primitive is the cumulative normal distribution, which certainly exists.

Since the integral is a (kind of) sum, it is straightforward to understand Leibniz's rule.

Result 3 If $f$ is differentiable, then

$$\frac{\partial}{\partial x}\int_a^b f(x,t)\,dt=\int_a^b\frac{\partial}{\partial x}f(x,t)\,dt,\qquad(11)$$
$$\frac{\partial}{\partial a}\int_a^b f(x,t)\,dt=-f(x,a),\qquad(12)$$
$$\frac{\partial}{\partial b}\int_a^b f(x,t)\,dt=f(x,b).\qquad(13)$$

[Figure: the area under $f(t,x)$ between $a$ and $b$, and how it changes when $x$, $a$ and $b$ change by $dx$, $da$ and $db$.]
2.2.2 Change of variables

Suppose that $y=g(x)$; then the rules of differentiation give $dy=g'(x)\,dx$. Now let us calculate the area under some function $f(y)$ by integrating over $x$. In a sense, this is like changing the scale of the horizontal axis. To do this, we simply substitute $y=g(x)$, $dy=g'(x)\,dx$. To get the integration limits, we note that if $x=a$ then $y=g(a)$.

Result 4 If $y=g(x)$, then

$$\int_{g(a)}^{g(b)}f(y)\,dy=\int_a^b f(g(x))\,g'(x)\,dx.\qquad(14)$$

Sometimes a variable substitution makes integration simpler. Take the following example:

$$\int_1^2\left(x^2+1\right)^{10}2x\,dx.\qquad(15)$$

Now define $y=x^2+1=g(x)$, implying

$$dy=2x\,dx,\quad g(1)=2,\quad g(2)=5,\qquad(16)$$
$$\int_1^2\left(x^2+1\right)^{10}2x\,dx=\int_2^5 y^{10}\,dy.\qquad(17)$$
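Both sides of (17) can be verified with a crude numerical integral; the midpoint rule below is just a convenience, and the exact value $(5^{11}-2^{11})/11$ follows from Result 2:

```python
# Numerical sanity check of the substitution (15)-(17).
def riemann(f, a, b, n=100000):
    dt = (b - a) / n
    return sum(f(a + (i + 0.5) * dt) * dt for i in range(n))  # midpoint rule

lhs = riemann(lambda x: (x**2 + 1)**10 * 2 * x, 1.0, 2.0)
rhs = riemann(lambda y: y**10, 2.0, 5.0)
print(lhs, rhs, (5**11 - 2**11) / 11)  # all three agree to several digits
```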
2.2.3 Integration by parts

Another important result, which we will use a lot, is the following rule for integration by parts. Assume we want to integrate a product of two functions of $x$, i.e., $u(x)v(x)$. Then let $U(x)$ be a primitive of $u$. The rule for differentiation of products says

$$\frac{d\left(U(x)v(x)\right)}{dx}=u(x)v(x)+U(x)v'(x)\qquad(18)$$
$$\Rightarrow\ u(x)v(x)=\frac{d\left(U(x)v(x)\right)}{dx}-U(x)v'(x).\qquad(19)$$

Then we can integrate over $x$ on both sides, giving

Result 5

$$\int_a^b u(x)v(x)\,dx=\int_a^b\frac{d\left(U(x)v(x)\right)}{dx}\,dx-\int_a^b U(x)v'(x)\,dx\qquad(20)$$
$$=\left[U(x)v(x)\right]_a^b-\int_a^b U(x)v'(x)\,dx.\qquad(21)$$
2.2.4 Double integration

As we know, the integral is an area under a curve $f(t)$ over an interval $[a,b]$ in the $t$ dimension. Similarly, we can compute the volume under a surface with height $f(t,s)$ over an area in the $(t,s)$ dimensions. For example, let $f(t,s)$ be $t\cdot s$ and integrate over a rectangle with sides $[a_t,b_t]$ and $[a_s,b_s]$.

[Figure: the function $z=ts$ over the $(t,s)$ plane.]

The volume under the surface $f(t,s)$ is then given by

$$\int_{a_s}^{b_s}\int_{a_t}^{b_t}f(t,s)\,dt\,ds=\int_{a_s}^{b_s}\left(\int_{a_t}^{b_t}ts\,dt\right)ds=\qquad(22)$$
$$\int_{a_s}^{b_s}\left(\left[\frac{t^2}{2}s\right]_{a_t}^{b_t}\right)ds=\int_{a_s}^{b_s}s\,\frac{b_t^2-a_t^2}{2}\,ds=\qquad(23)$$
$$\frac{b_t^2-a_t^2}{2}\left[\frac{s^2}{2}\right]_{a_s}^{b_s}=\frac{b_t^2-a_t^2}{2}\cdot\frac{b_s^2-a_s^2}{2}.\qquad(24)$$

Note that we are first calculating the area under $f(t,s)$ over the interval $[a_t,b_t]$ for each $s$. This area is a function of $s$, which we then integrate over the interval $[a_s,b_s]$ in the $s$-dimension. If we integrate over areas other than rectangles, the limits of integration are not independent. For example, we may integrate over a triangle where $b_t=s$. Simplify and set $a_t=a_s=0$. Then we have

$$\int_0^{b_s}\int_0^{s}f(t,s)\,dt\,ds=\int_0^{b_s}\left(\int_0^{s}ts\,dt\right)ds=\int_0^{b_s}\left(\left[\frac{t^2}{2}s\right]_0^{s}\right)ds=\qquad(25)$$
$$\int_0^{b_s}\frac{s^3}{2}\,ds=\left[\frac{s^4}{8}\right]_0^{b_s}=\frac{b_s^4}{8}.\qquad(26)$$
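Both (24) and (26) can be checked with a double Riemann sum; the bounds $a_t=a_s=0$ and $b_t=b_s=2$ below are illustrative assumptions:

```python
# Rectangle: (b_t**2/2)*(b_s**2/2) = 4; triangle (t <= s): b_s**4/8 = 2.
n = 400
h = 2.0 / n
rect = tri = 0.0
for j in range(n):
    s = (j + 0.5) * h
    for i in range(n):
        t = (i + 0.5) * h
        rect += t * s * h * h        # f(t, s) = t*s over the whole rectangle
        if t <= s:
            tri += t * s * h * h     # keep only the triangle where t <= s
print(rect, tri)                     # approximately 4.0 and 2.0
```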
2.3 Complex numbers and trigonometric functions

The formula for the solution to a quadratic equation $ax^2+bx+c=0$ is

$$x=-\frac{b}{2a}\pm\frac{\sqrt{b^2-4ac}}{2a}.\qquad(27)$$

If $b^2-4ac<0$, the solution is not in the set of real numbers. The introduction of complex numbers was intended to extend the space of solutions to accommodate such cases, and it turns out that for all numbers in the extended space, we can always find solutions to such equations. We can think of complex numbers as two-dimensional objects $z=(x,y)$. The first number, $x$, provides the value in the standard real dimension, while the second provides the value in the other dimension, called imaginary. Thus, real numbers are a special subset of complex numbers such that $y=0$.

[Figure: the complex plane, with $1=(1,0)$ on the real axis, $i=(0,1)$ on the imaginary axis, and a generic point $z=(x,y)$.]
The rules for addition and multiplication of complex numbers are the following:

$$z_1+z_2=(x_1+x_2,\ y_1+y_2)\qquad(28)$$
$$z_1z_2=(x_1x_2-y_1y_2,\ x_1y_2+x_2y_1).\qquad(29)$$

Using these rules, we can compute the square of a complex number consisting only of a unitary imaginary part, i.e., $z=i=(0,1)$:

$$(0,1)^2=(0-1,\ 0+0)=(-1,0)=-1.\qquad(30)$$

We have thus established the important result:

Result 6 $x=i$ is a solution to the equation $x^2=-1$.

Using the rules above, it is also straightforward to show that an alternative way of writing $z$ is given by the following:

$$z=(x,y)=(x,0)+(0,y)=x(1,0)+y(0,1)=x+yi.\qquad(31)$$

We should also note that

$$(x,y)(x,-y)=\left(x^2+y^2,\ -xy+xy\right)=\left(x^2+y^2,\ 0\right)=\left|(x,y)\right|^2.\qquad(32)$$

The numbers $(x,y)$ and $(x,-y)$ are called complex conjugates, and the value $|(x,y)|$ is called the modulus of $(x,y)$.
2.3.1 Polar representation

Recall that in the right-angled triangle in the figure, we have

$$\cos(\theta)=\frac{x}{r}\qquad(33)$$
$$\sin(\theta)=\frac{y}{r}\qquad(34)$$
$$r=\sqrt{x^2+y^2}.\qquad(35)$$

[Figure: the point $z=(x,y)$ with modulus $r$ and angle $\theta$.]

Using this, we can represent the complex number $z$ either by its coordinates $(x,y)$ or, alternatively, as $r(\cos(\theta)+i\sin(\theta))$. The latter form is called the polar representation; $r$ is the modulus and $\theta$ is the argument of $z$. Usually we measure $\theta$ in radians.

We will use the following result below.

Result 7 Let $z$ be the complex number $(x,y)$; then $z$ satisfies

$$z=r\,e^{i\theta}\qquad(36)$$

where $r$ is the modulus of $z$ and $\theta$ is the argument. In particular, for $r=1$ we get

$$e^{i\theta}=\cos(\theta)+i\sin(\theta)\qquad(37)$$
$$e^{-i\theta}=\cos(-\theta)+i\sin(-\theta)=\cos(\theta)-i\sin(\theta)\qquad(38)$$
$$e^{i\pi}=\cos(\pi)+i\sin(\pi)=-1.\qquad(39)$$
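Result 7 is easy to check in Python, where the complex unit $i$ is written `1j` and `cmath` is the standard library's complex-math module:

```python
import cmath

z = 3 + 4j
r, theta = abs(z), cmath.phase(z)  # modulus and argument of z
print(r * cmath.exp(1j * theta))   # (3+4j): the polar form reproduces z
print(cmath.exp(1j * cmath.pi))    # (-1+0j) up to rounding: e^{i*pi} = -1
```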
Optional proof:

The Taylor formula around zero for any function $f(x)$ is

$$f(x)=f(0)+\frac{f'(0)}{1!}x+\frac{f''(0)}{2!}x^2+\frac{f'''(0)}{3!}x^3+\dots\qquad(40)$$

Using this for $f(x)=e^x$, $\cos(x)$ and $\sin(x)$, and the rules

$$\frac{d}{dx}\cos(x)=-\sin(x)\qquad(41)$$
$$\frac{d}{dx}\sin(x)=\cos(x),\qquad(42)$$

we have, respectively,

$$e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!}+\dots\qquad(43)$$
$$\cos(x)=\cos(0)-\sin(0)x-\frac{\cos(0)}{2!}x^2+\frac{\sin(0)}{3!}x^3\qquad(44)$$
$$+\frac{\cos(0)}{4!}x^4-\frac{\sin(0)}{5!}x^5-\frac{\cos(0)}{6!}x^6\dots\qquad(45)$$
$$=1-\frac{x^2}{2}+\frac{x^4}{4!}-\frac{x^6}{6!}\dots\qquad(46)$$
$$\sin(x)=\sin(0)+\cos(0)x-\frac{\sin(0)}{2!}x^2-\frac{\cos(0)}{3!}x^3\qquad(47)$$
$$+\frac{\sin(0)}{4!}x^4+\frac{\cos(0)}{5!}x^5-\frac{\sin(0)}{6!}x^6-\frac{\cos(0)}{7!}x^7\dots\qquad(48)$$
$$=x-\frac{1}{3!}x^3+\frac{1}{5!}x^5-\frac{1}{7!}x^7\dots\qquad(49)$$

Thus, using the Taylor formula around zero to evaluate $f(i\theta)=e^{i\theta}$, we have

$$e^{i\theta}=1+i\theta+\frac{(i\theta)^2}{2!}+\frac{(i\theta)^3}{3!}+\frac{(i\theta)^4}{4!}+\frac{(i\theta)^5}{5!}+\frac{(i\theta)^6}{6!}\dots\qquad(50)$$
$$=1+i\theta-\frac{1}{2}\theta^2-\frac{1}{3!}i\theta^3+\frac{1}{4!}\theta^4+\frac{1}{5!}i\theta^5-\frac{1}{6!}\theta^6\dots\qquad(51)$$
$$=1-\frac{1}{2}\theta^2+\frac{1}{4!}\theta^4-\frac{1}{6!}\theta^6\dots\qquad(52)$$
$$+i\left(\theta-\frac{1}{3!}\theta^3+\frac{1}{5!}\theta^5\dots\right)\qquad(53)$$
$$=\cos(\theta)+i\sin(\theta).\qquad(54)$$
2.3.2 Polynomials

A polynomial $P(z)$ of order $n$ is defined as a weighted sum of $z^s$ for $s\in\{0,\dots,n\}$, i.e.,

$$P(z)=a_nz^n+a_{n-1}z^{n-1}+\dots+a_1z^1+a_0,\qquad(55)$$

for some sequence of constants $\{a_s\}_0^n$. The following will be important for our analysis of difference and differential equations.

Result 8 A polynomial $P(z)$ of order $n$ has exactly $n$, not necessarily distinct, roots; i.e., it can be expressed as

$$P(z)=a_n\left(z-r_1\right)\left(z-r_2\right)\dots\left(z-r_n\right).\qquad(56)$$

From (56), we see that each root $r_i$ is a solution to the equation $P(z)=0$. Note that the roots may be repeated, i.e., $r_i=r_j$, and that, of course, some roots may be complex. It turns out, also, that (when the coefficients are real) complex roots always come in pairs, the complex conjugates.
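Result 8 can be illustrated numerically with numpy's standard root finder, which returns all $n$ roots, real and complex. The cubic below is a hypothetical example, factored by construction as $(z-2)(z-i)(z+i)$:

```python
import numpy as np

# z**3 - 2*z**2 + z - 2 has one real root and one complex-conjugate pair.
print(np.roots([1, -2, 1, -2]))  # -> [2.+0.j, 0.+1.j, 0.-1.j] up to ordering
```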
3 Differential equations

3.1 Linear Differential Equations of First Order

A first-order differential equation is an equation of the form

$$\frac{dx(t)}{dt}=\dot x(t)=f(x,t).\qquad(1)$$

As noted above, there will in general be a whole class of functions $x(t;c)$ (parameterized by $c$) that satisfies the differential equation (1). We need more information, like an initial condition for $x(t_0)$, to pin down the solution exactly.

Result 9 Given that $f$ is continuous and has continuous first derivatives, there is exactly one function $x(t)$ that satisfies (1) and an initial condition.
3.1.1 The simplest case

If $f$ is independent of $x$, the solution is trivial. Then, since

$$\dot x(t)=f(t),\qquad(2)$$

the class of functions satisfying this is a primitive function of $f$ plus any constant, i.e., for any $t_0$ and $\tilde c$,

$$x(t)=\int_{t_0}^{t}f(s)\,ds+\tilde c=F(t)-F(t_0)+\tilde c,\qquad(3)$$

where $F(t)$ is any primitive of $f$, satisfies (2). Note that $F(t_0)$ is a constant. We can thus merge the two constants, defining $c=\tilde c-F(t_0)$, and write

$$x(t)=F(t)+c,\qquad(4)$$

which shows that there is only one degree of freedom in the constants $\tilde c$ and $t_0$. Choosing another $t_0$ simply means that the constant $\tilde c$ has to be chosen in another way. Certainly, for all $c$, (4) satisfies (1). For example, if

$$f(t)=e^{at},\qquad(5)$$

then

$$F(t)=\frac{e^{at}}{a}\qquad(6)$$

is a primitive for $f(t)$. The arbitrary constant is pinned down by some other piece of information. So, if we want to find $x(t)$ and we know the value of $x(0)$, we get

$$x(t)=F(t)+c\qquad(7)$$
$$x(0)=F(0)+c\qquad(8)$$
$$c=x(0)-F(0)\qquad(9)$$
$$x(t)=F(t)+x(0)-F(0).\qquad(10)$$
3.1.2 A note on notation

Above, we have used the proper notation for an integral, where both the lower and upper limits and the dummy variable to integrate over have separate names and are all written out. Often, a sloppier notation is used: for an arbitrary $t_0$, we write

$$\int_{t_0}^{t}f(s)\,ds=\int f(t)\,dt=F(t),\qquad(11)$$

where it is understood that $\int f(t)\,dt$ is a (any) primitive of $f(t)$. This notation, called an indefinite integral, saves on variables but can be confusing, since $t$ is used both as the variable to integrate over and as the upper integration limit. Nevertheless, I will follow ordinary practice and use it below. Using this notation, we rewrite (3) as

$$x(t)=\int f(t)\,dt+c.\qquad(12)$$
3.1.3 A little bit more complicated

Very often, a solution to a more complicated differential equation is derived by transforming the original equation into something that has the form of (2). Linear first-order differential equations with constant coefficients can be solved directly using such a transformation. Consider

$$\dot y(t)+r\,y(t)=q.\qquad(13)$$

In this case, we multiply both sides by $e^{rt}$ (often called the integrating factor). After doing that, note that the LHS becomes

$$e^{rt}\left(\dot y(t)+r\,y(t)\right)=\frac{d\left(e^{rt}y(t)\right)}{dt}.\qquad(14)$$

Thus, thinking of $e^{rt}y(t)$ as simply a function of $t$, like $x(t)$ in (2), we get a LHS that is the time derivative of a known function of $t$, and the RHS is also a function only of $t$. Let us set $t_0=0$; then the solution is found as in (3):

$$\frac{d\left(e^{rt}y(t)\right)}{dt}=e^{rt}q\qquad(15)$$
$$e^{rt}y(t)=\int_0^t e^{rs}q\,ds+c=\left[\frac{e^{rs}q}{r}\right]_0^t+c\qquad(16)$$
$$=\frac{q}{r}\left(e^{rt}-1\right)+c\qquad(17)$$
$$y(t)=e^{-rt}\left(\frac{q}{r}\left(e^{rt}-1\right)+c\right)=\frac{q}{r}+\left(c-\frac{q}{r}\right)e^{-rt}.\qquad(18)$$

If we know $y(t)$ at some point in time, e.g. at $t=0$:

$$y(0)=\frac{q}{r}+\left(c-\frac{q}{r}\right)=c\qquad(19)$$
$$y(t)=\frac{q}{r}+\left(y(0)-\frac{q}{r}\right)e^{-rt}.\qquad(20)$$
Let us compare this to the case where we use the indefinite integral. Then we have

$$\frac{d\left(e^{rt}y(t)\right)}{dt}=e^{rt}q\qquad(21)$$
$$e^{rt}y(t)=\int e^{rt}q\,dt+c=\frac{e^{rt}q}{r}+c\qquad(22)$$
$$=\frac{q}{r}e^{rt}+c\qquad(23)$$
$$y(t)=\frac{q}{r}+c\,e^{-rt}.\qquad(24)$$
$$y(0)=\frac{q}{r}+c\ \Rightarrow\ c=y(0)-\frac{q}{r}\qquad(25)$$
$$y(t)=\frac{q}{r}+\left(y(0)-\frac{q}{r}\right)e^{-rt}.\qquad(26)$$

As you see, although the integration constants are different, the result is the same. The difference is that the constant subsumes the primitive evaluated at the lower integration limit.
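As a cross-check of (20), one can integrate (13) with a simple Euler scheme; the parameter values below ($r=0.5$, $q=1$, $y(0)=3$) are illustrative assumptions:

```python
import math

r, q, y0 = 0.5, 1.0, 3.0
dt, T = 1e-4, 2.0
y, t = y0, 0.0
while t < T:
    y += (q - r * y) * dt   # dy/dt = q - r*y, i.e. equation (13)
    t += dt
exact = q / r + (y0 - q / r) * math.exp(-r * T)  # closed form (20)
print(y, exact)             # the two agree to about 1e-4
```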
3.1.4 Variable exogenous parts

Let us generalize (13) by assuming that $q=q(t)$, a function of time. Consider an arbitrage condition where $y(t)$ is an asset price, with dividends $q(t)$ per unit of time. In order not to lose track of things, we use the notation with integration limits, rather than indefinite integrals.

Holding the asset should yield no excess return, i.e.,

$$\dot y(t)+q(t)=r\,y(t)\qquad(27)$$
$$\dot y(t)-r\,y(t)=-q(t)\qquad(28)$$
$$\Rightarrow\ e^{-rt}y(t)=-\int_{t_0}^{t}e^{-rs}q(s)\,ds+c.\qquad(29)$$

Often, economic theory lets us use a so-called No-Ponzi condition (typically requiring $r>0$). Here this gives an end condition, since it requires that

$$\lim_{t\to\infty}e^{-rt}y(t)=0.\qquad(30)$$

Using this, we get

$$c=\lim_{t\to\infty}\int_{t_0}^{t}e^{-rs}q(s)\,ds=\int_{t_0}^{\infty}e^{-rs}q(s)\,ds,\qquad(31)$$

giving

$$e^{-rt}y(t)=\int_{t_0}^{\infty}e^{-rs}q(s)\,ds-\int_{t_0}^{t}e^{-rs}q(s)\,ds\qquad(32)$$
$$=\int_{t}^{\infty}e^{-rs}q(s)\,ds\qquad(33)$$
$$y(t)=e^{rt}\int_{t}^{\infty}e^{-rs}q(s)\,ds=\int_{t}^{\infty}e^{-r(s-t)}q(s)\,ds,\qquad(34)$$

i.e., no arbitrage plus No-Ponzi implies that the price must equal the discounted sum of future dividends.
3.1.5 Variable coefficients

If also the coefficient on the endogenous variable $y(t)$ varies over time, we need a more general integrating factor to make the LHS of the equation the time differential of a known function. For example, consider the no-arbitrage equation for an asset price $y(t)$ with dividends $q(t)$ and interest rate $r(t)$:

$$\dot y(t)+q(t)=r(t)\,y(t)\ \Leftrightarrow\ \dot y(t)-r(t)\,y(t)=-q(t).\qquad(35)$$

Here the integrating factor is

$$e^{-\int_{t_0}^{t}r(s)\,ds},\qquad(36)$$

with

$$\frac{d\,e^{-\int_{t_0}^{t}r(s)\,ds}}{dt}=-r(t)\,e^{-\int_{t_0}^{t}r(s)\,ds},\qquad(37)$$

alternatively expressed as

$$\frac{d\,e^{-\int r(t)\,dt}}{dt}=-r(t)\,e^{-\int r(t)\,dt}.$$

Approximating the integral as a sum of rectangles with base $\Delta t$, as in section 2.2 (which would be exact if $r(t)$ were piecewise constant), and defining $r(t_0+s\Delta t)=r_s$ for the integers $s\in\{0,\dots,S\}$, $S=(t-t_0)/\Delta t$, the integrating factor can be written

$$e^{-\int_{t_0}^{t}r(s)\,ds}\approx e^{-r_0\Delta t}e^{-r_1\Delta t}\dots e^{-r_S\Delta t}\approx\left(\frac{1}{1+r_0\Delta t}\right)\left(\frac{1}{1+r_1\Delta t}\right)\dots\left(\frac{1}{1+r_S\Delta t}\right),\qquad(38)$$

i.e., it is the product of all short-run discount factors between $t_0$ and $t$. To save on notation, this product is denoted

$$e^{-\int_{t_0}^{t}r(s)\,ds}\equiv R(t;t_0),\qquad(39)$$

or, if the starting point is suppressed, $R(t)$. This has a straightforward interpretation. Suppose the variable discount rate is given by $r(s)$; then the discounted value of a payment $y(t)$ at $t$, seen from $t_0$, is $R(t;t_0)\,y(t)$. Using the integrating factor, we get

$$R(t)\left(\dot y(t)-r(t)\,y(t)\right)=-R(t)\,q(t)\qquad(40)$$
$$\frac{d\,R(t)y(t)}{dt}=-R(t)\,q(t)\qquad(41)$$
$$R(t)\,y(t)=-\int_{t_0}^{t}R(s)\,q(s)\,ds+c.\qquad(42)$$

Suppose again that $\lim_{t\to\infty}R(t)\,y(t)=0$, implying $c=\int_{t_0}^{\infty}R(s)\,q(s)\,ds$. Then, noting that $R(t)^{-1}R(s)=e^{\int_{t_0}^{t}r(\tau)\,d\tau-\int_{t_0}^{s}r(\tau)\,d\tau}=e^{-\int_{t}^{s}r(\tau)\,d\tau}=R(s;t)$, we have

$$R(t)\,y(t)=-\int_{t_0}^{t}R(s)\,q(s)\,ds+\int_{t_0}^{\infty}R(s)\,q(s)\,ds=\int_{t}^{\infty}R(s)\,q(s)\,ds\qquad(43)$$
$$y(t)=\int_{t}^{\infty}R(t)^{-1}R(s)\,q(s)\,ds=\int_{t}^{\infty}e^{-\int_{t}^{s}r(\tau)\,d\tau}q(s)\,ds,\qquad(44)$$

i.e., the asset price equals the discounted value of future dividends.
As another example, consider money in a bank account with a variable interest rate and deposits $q(t)$. Then

$$\dot y(t)=r(t)\,y(t)+q(t)\qquad(45)$$
$$\dot y(t)-r(t)\,y(t)=q(t)\qquad(46)$$
$$\frac{d\,R(t,t_0)y(t)}{dt}=R(t,t_0)\,q(t)\qquad(47)$$
$$R(t,t_0)\,y(t)=\int_{t_0}^{t}R(s,t_0)\,q(s)\,ds+c\qquad(48)$$
$$y(t)=\int_{t_0}^{t}R(t,t_0)^{-1}R(s,t_0)\,q(s)\,ds+R(t,t_0)^{-1}c.\qquad(49)$$

Since $s\le t$ here, the term $R(t,t_0)^{-1}R(s,t_0)$ is more conveniently [2] written $e^{\int_{s}^{t}r(\tau)\,d\tau}$. Using also an initial condition, $y(t_0)$, we have

$$y(t)=\int_{t_0}^{t}e^{\int_{s}^{t}r(\tau)\,d\tau}q(s)\,ds+e^{\int_{t_0}^{t}r(\tau)\,d\tau}y(t_0),\qquad(50)$$

where the first term is the period-$t$ value of all deposits up until $t$ and the second term is the period-$t$ value of the initial amount in the bank.

[2] But remember $\int_{s}^{t}r(\tau)\,d\tau=-\int_{t}^{s}r(\tau)\,d\tau$.
3.1.6 Separating variables

Sometimes we can write a differential equation such that the LHS contains only functions of $x$ and $\dot x$ and the RHS only a function of $t$. For example,

$$\dot x(t)=\frac{h(t)}{g(x)}\qquad(51)$$
$$\dot x(t)\,g(x)=h(t).\qquad(52)$$

In this case, we can use the following trick. Let $G(x)$ be any primitive of $g(x)$, i.e., $\frac{dG(x)}{dx}=g(x)$. Then

$$\dot x(t)\,g(x(t))=\frac{dG(x(t))}{dt}=h(t)\qquad(53)$$
$$G(x)=\int h(s)\,ds+c.\qquad(54)$$

We can then recover $x$ by inverting $G$. An example:

$$\dot x(t)=\left(x(t)\,t\right)^2\qquad(55)$$
$$\dot x(t)\,x(t)^{-2}=t^2.\qquad(56)$$

Now let $g(x)=x^{-2}$, implying $G(x)=-x^{-1}$. Then, since $\frac{dG(x(t))}{dt}=\dot x(t)\,g(x(t))=\dot x(t)\,x(t)^{-2}$, we have

$$\frac{dG(x(t))}{dt}=t^2\qquad(57)$$
$$G(x(t))=-x(t)^{-1}=\int t^2\,dt+c=\frac{t^3}{3}+c.\qquad(58)$$

Using the fact that $z=G(x)=-x^{-1}\Rightarrow x=-z^{-1}$, so the inverse function is given by $G^{-1}(z)=-z^{-1}$, we get

$$x(t)=G^{-1}\left(\frac{t^3}{3}+c\right)=-\left(\frac{t^3}{3}+c\right)^{-1}.$$
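A quick numerical check of the example (55)-(58), assuming the initial condition $x(0)=1$ so that $c=-1/x(0)=-1$:

```python
# Euler-integrate dx/dt = (x*t)**2 and compare with x(t) = -1/(t**3/3 + c).
c = -1.0
exact = lambda t: -1.0 / (t**3 / 3 + c)
x, t, dt = 1.0, 0.0, 1e-5
while t < 1.0:
    x += (x * t) ** 2 * dt     # Euler step on (55)
    t += dt
print(x, exact(1.0))           # both close to 1.5
```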
3.2 Linear differential equations of higher order

3.2.1 Linear second order differential equations

A linear second-order differential equation has the form

$$\ddot y(t)+p(t)\,\dot y(t)+q(t)\,y(t)=F(t).\qquad(59)$$

This cannot be solved directly by transformations in the simple way we did with first-order equations. Instead, we use a more general method (that would have worked above also). First some definitions.

Definition 1 The homogeneous part of a differential equation is obtained by setting all exogenous variables, including constants, to zero.

Definition 2 Two functions $y_1(t)$ and $y_2(t)$ are linearly independent in a region iff there are no constants $c_1,c_2$, not both zero, such that

$$c_1y_1(t)=c_2y_2(t)\quad\forall t\text{ in the region}.\qquad(60)$$

Result 10 The general solution (the complete set of solutions) to a differential equation is the general solution to the homogeneous part plus any particular solution to the complete equation.

The general solution of the homogeneous part of a second-order linear differential equation can be expressed as $c_1y_1(t)+c_2y_2(t)$, where $y_1(t)$ and $y_2(t)$ are two linearly independent particular solutions to the homogeneous equation and $c_1$ and $c_2$ are arbitrary integration constants.
3.2.2 Homogeneous Equations with Constant Coefficients

Consider the homogeneous part of a differential equation, given by

$$\ddot y(t)+p\,\dot y(t)+q\,y(t)=0.\qquad(61)$$

To solve this equation, we first define the characteristic equation for this equation:

$$r^2+pr+q=0.\qquad(62)$$

Since this is a second-order polynomial, it has two roots:

$$r_{1,2}=-\frac{1}{2}p\pm\frac{1}{2}\sqrt{p^2-4q}.\qquad(63)$$

Now it is straightforward to see that $e^{r_1t}$ and $e^{r_2t}$ are both solutions to the homogeneous equation, by noting that for $i\in\{1,2\}$

$$\frac{d\,e^{r_it}}{dt}=r_ie^{r_it}\qquad(64)$$
$$\frac{d^2e^{r_it}}{dt^2}=r_i^2e^{r_it}\qquad(65)$$
$$\frac{d^2e^{r_it}}{dt^2}+p\,\frac{d\,e^{r_it}}{dt}+q\,e^{r_it}=\left(r_i^2+pr_i+q\right)e^{r_it}=0.\qquad(66)$$

So then, Result 10 tells us that the general solution to the homogeneous equation is

$$y_h(t)=c_1e^{r_1t}+c_2e^{r_2t},\qquad(67)$$

provided $e^{r_1t}$ and $e^{r_2t}$ are linearly independent. It is easy to verify that this is the case if and only if $r_1\neq r_2$.
3.2.3 Complex roots

Complex roots pose no particular difficulty; we simply have to recall that for any real number $b$,

$$e^{bi}=\cos(b)+i\sin(b),\qquad(68)$$
$$e^{-bi}=\cos(b)-i\sin(b),\qquad(69)$$

yielding, for the complex roots $r_{1,2}=a\pm bi$,

$$c_1e^{(a+bi)t}+c_2e^{(a-bi)t}=\qquad(70)$$
$$c_1e^{at}\left(\cos(bt)+i\sin(bt)\right)+c_2e^{at}\left(\cos(bt)-i\sin(bt)\right)\qquad(71)$$
$$=e^{at}\left(\left(c_1+c_2\right)\cos(bt)+\left(c_1-c_2\right)i\sin(bt)\right).\qquad(72)$$

Note here that $c_1$ and $c_2$ are arbitrary constants in the space of complex numbers. Define

$$c_1+c_2=\tilde c_1,\qquad(73)$$
$$(c_1-c_2)\,i=\tilde c_2.\qquad(74)$$

It turns out that for any $\tilde c_1$ and $\tilde c_2$ on the real line, we can find $c_1$ and $c_2$ satisfying this definition. Since we, at least in economics, are only interested in solutions in the real space, we can use the restricted set of constants satisfying $\tilde c_1$ and $\tilde c_2$ being on the real line. We can then write the general solution in the real space as

$$y_h(t)=e^{at}\left(\tilde c_1\cos(bt)+\tilde c_2\sin(bt)\right).\qquad(75)$$
3.2.4 Repeated roots

The general solution to the homogeneous equation in the case when the roots are repeated, i.e., $r_1=r_2=r$, is

$$y_h(t)=c_1e^{rt}+c_2te^{rt}.\qquad(76)$$

Convince yourself that the two terms are linearly independent, and check that they are both solutions if the roots are repeated, but not otherwise!
3.2.5 Non-Homogeneous Equations with Constant Coefficients

Relying on Result 10, the only added problem when we have a non-homogeneous equation is that we have to find one particular solution to the complete equation. Consider

$$\ddot y(t)+p\,\dot y(t)+q\,y(t)=F(t).\qquad(77)$$

Typically, we guess a form of this solution and then use the method of undetermined coefficients. Often a good guess is a solution of a form similar to $F(t)$: e.g., if it is a polynomial of degree $n$, we guess a general $n$th-degree polynomial with unknown coefficients. The simplest example is if $F(t)$ equals a constant $\bar F$; we then guess a constant, $y_p(t)=y_{ss}$, i.e., a steady state, in which case $\ddot y(t)$ and $\dot y(t)$ are both zero. To satisfy the differential equation,

$$q\,y_{ss}=\bar F\qquad(78)$$
$$y_{ss}=\frac{\bar F}{q}.\qquad(79)$$

Another example is

$$\ddot y(t)-2\dot y(t)+y(t)=3t^2+t,\qquad(80)$$

in which case we guess

$$y_p(t)=At^2+Bt+C,\qquad(81)$$

for some yet-undetermined coefficients $A$, $B$ and $C$. We then solve for these constants by substituting into the differential equation:

$$2A-2(2At+B)+At^2+Bt+C=3t^2+t.\qquad(82)$$

For this to hold for each $t$, we need

$$A=3\qquad(83)$$
$$-4A+B=1\qquad(84)$$
$$2A-2B+C=0,\qquad(85)$$

yielding $A=3$, $B=13$ and $C=20$. So a particular solution is

$$y_p(t)=3t^2+13t+20.\qquad(86)$$

The characteristic equation is

$$r^2-2r+1=(r-1)^2\ \Rightarrow\ r_{1,2}=1.\qquad(87)$$

So the general solution is

$$y(t)=c_1e^{t}+c_2te^{t}+3t^2+13t+20.\qquad(88)$$
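One can verify (88) numerically with finite differences; the choice $c_1=c_2=1$ below is an illustrative assumption:

```python
import math

# y(t) = e**t + t*e**t + 3t**2 + 13t + 20 should satisfy y'' - 2y' + y = 3t**2 + t.
y = lambda t: math.exp(t) + t * math.exp(t) + 3 * t**2 + 13 * t + 20
h = 1e-4
for t in (0.0, 0.5, 1.0):
    d1 = (y(t + h) - y(t - h)) / (2 * h)           # numerical y'
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2   # numerical y''
    print(d2 - 2 * d1 + y(t), 3 * t**2 + t)        # the two columns agree
```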
3.2.6 Linear nth Order Differential Equations

Consider the $n$th-order differential equation

$$y^{(n)}(t)+P_1y^{(n-1)}(t)+\dots+P_ny(t)=F(t),\qquad(89)$$

where

$$y^{(n)}(t)=\frac{d^ny(t)}{dt^n}.\qquad(90)$$

The solution technique here is exactly analogous to the second-order case. First, some definitions.

Definition 3 For any differentiable function $y(t)$, the differential operator $D$ is defined by

$$Dy(t)=\frac{dy(t)}{dt},\qquad(91)$$

satisfying $D^ny(t)=\frac{d^ny(t)}{dt^n}$.

Using this definition, and defining the following polynomial,

$$P(r)=r^n+P_1r^{n-1}+\dots+P_n,\qquad(92)$$

we have

$$P(D)\,y(t)=\left(D^n+P_1D^{n-1}+\dots+P_n\right)y(t)\qquad(93)$$
$$=D^ny(t)+P_1D^{n-1}y(t)+\dots+P_ny(t),\qquad(94)$$

so we can write (89) in the condensed form

$$P(D)\,y(t)=F(t)\qquad(95)$$

and the characteristic equation is

$$P(r)=0,\qquad(96)$$

with roots $r_{1,\dots,n}$.

The general solution to the homogeneous part is then the sum of the $n$ solutions corresponding to each of the $n$ roots. The only thing to note is that if one root, $r$, is repeated $k\ge2$ times, the solutions corresponding to this root are given by

$$c_1e^{rt}+c_2te^{rt}+\dots+c_kt^{k-1}e^{rt}.\qquad(97)$$

Repeated complex roots are handled the same way. Say the root $a\pm bi$ is repeated in $k$ pairs. Their contribution to the general solution is given by

$$e^{at}\left(c_1\cos(bt)+c_2\sin(bt)+t\left(c_3\cos(bt)+c_4\sin(bt)\right)\right)+\dots\qquad(98)$$
$$+e^{at}t^{k-1}\left(c_{2k-1}\cos(bt)+c_{2k}\sin(bt)\right).\qquad(99)$$

A particular solution to the complete equation can often be found by the method of undetermined coefficients.
3.3 Stability

From the solutions to the differential equations we have seen, we find that the terms corresponding to roots with positive real parts (unstable, or explosive, roots) tend to explode as $t$ goes to infinity. This means that the solution also explodes, unless the corresponding integration constants are zero. Terms with roots that have negative real parts (stable roots), on the other hand, always converge to zero.

If all roots of a differential equation with a constant exogenous part are strictly negative, the system converges to a unique point as time goes to infinity, wherever it starts. This point is often called a sink, and the system is said to be globally stable.

If all roots are strictly positive, the system is globally unstable, but there is still a steady state; this is sometimes called the origin. The reason for this is that if the system is unstable in all dimensions when time goes forward, it will be stable in all dimensions if time goes in reverse. Starting from any point and going backward, one reaches, in the limit, the steady state, i.e., it is the origin of all paths.

If a system has both stable and unstable roots, it is called saddle-path stable. Then it is stable in some sub-dimensions.
3.3.1 Non-linear equations and local stability

Look at the nonlinear differential equation

$$\dot x(t)=x(t)^2-4.\qquad(100)$$

Although we have not learned how to solve such an equation, we can say something about it. Let us plot the relation between $x(t)$ and $\dot x(t)$.

[Figure: the parabola $\dot x(t)=x(t)^2-4$, crossing zero at $x=-2$ and $x=2$.]

We see that $x(t)=2$ and $x(t)=-2$ are stationary points. We also see that $x(t)=-2$ is locally stable: in the region $[-\infty,2)$, $x$ converges to $x=-2$. $x=2$ is an unstable stationary point, and in the region $(2,\infty]$, $x$ explodes over time. In a plot of $\dot x(t)$ against $x(t)$, local stability is equivalent to a negative slope at the stationary point.
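The local stability of $x=-2$ and instability of $x=2$ can be seen by simulating (100) from a few starting points; the horizon and step size below are illustrative:

```python
# Paths starting below 2 converge to -2; paths starting above 2 explode.
def simulate(x0, T=5.0, dt=1e-3):
    x = x0
    for _ in range(int(T / dt)):
        x += (x * x - 4.0) * dt
        if abs(x) > 1e6:
            return float('inf')   # treat as exploded
    return x

for x0 in (-5.0, 0.0, 1.9, 2.1):
    print(x0, simulate(x0))       # -> about -2, -2, -2, inf
```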
3.4 Systems of Linear First Order Differential Equations

Consider the following system of two first-order differential equations:

$$\begin{pmatrix}\dot y_1(t)\\\dot y_2(t)\end{pmatrix}=\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}\begin{pmatrix}y_1(t)\\y_2(t)\end{pmatrix}+\begin{pmatrix}p_1(t)\\p_2(t)\end{pmatrix}\qquad(101)$$
$$\dot{\mathbf{y}}(t)=A\,\mathbf{y}(t)+\mathbf{p}(t).\qquad(102)$$

As in the one-equation case, we start by finding the general solution to the homogeneous part. This plus some particular solution is the general solution to the complete system.

If the off-diagonal terms are zero, the solution to the homogeneous part is trivial, since there is no interdependency between the equations:

$$\begin{pmatrix}\dot y_1(t)\\\dot y_2(t)\end{pmatrix}=\begin{pmatrix}a_{11}&0\\0&a_{22}\end{pmatrix}\begin{pmatrix}y_1(t)\\y_2(t)\end{pmatrix}\qquad(103)$$
$$\Rightarrow\begin{pmatrix}y_1(t)\\y_2(t)\end{pmatrix}=\begin{pmatrix}e^{a_{11}t}&0\\0&e^{a_{22}t}\end{pmatrix}\begin{pmatrix}c_1\\c_2\end{pmatrix}.\qquad(104)$$

The system in (103) has an important property: time has no direct effect on the law of motion. Given knowledge of $y_1(t)$ and $y_2(t)$, $\dot y_1(t)$ and $\dot y_2(t)$ are fully determined, regardless of $t$. A system that has this property is called autonomous. The system (101) is not an autonomous system unless $\mathbf{p}(t)$ is constant. Note that the homogeneous solution is always autonomous, given, of course, that the parameters are constant.

The behavior of an autonomous system can be depicted in a graph, a phase diagram. We see in the phase diagram that if the roots are stable, i.e., negative, the homogeneous part always goes to zero as $t$ goes to infinity. With only one stable root, there is just one stable path.

[Figure: phase diagrams in the $(y_1,y_2)$ plane, one with two stable roots and one with one stable root.]
The fact that it is trivial to solve a diagonal system suggests a way of finding the solution to the homogeneous part of (101). Suppose we can make a transformation of the variables so that the transformed system is diagonal. Start by defining the new set of variables

$$\begin{pmatrix}x_1(t)\\x_2(t)\end{pmatrix}=B\begin{pmatrix}y_1(t)\\y_2(t)\end{pmatrix}\qquad(105)$$
$$\begin{pmatrix}\dot x_1(t)\\\dot x_2(t)\end{pmatrix}=B\begin{pmatrix}\dot y_1(t)\\\dot y_2(t)\end{pmatrix}=BA\begin{pmatrix}y_1(t)\\y_2(t)\end{pmatrix}=BAB^{-1}\begin{pmatrix}x_1(t)\\x_2(t)\end{pmatrix}.\qquad(106)$$

If we can find a $B$ such that $BAB^{-1}$ is diagonal, we are halfway. The solution for $\mathbf{x}$ is then

$$\begin{pmatrix}x_1(t)\\x_2(t)\end{pmatrix}=\begin{pmatrix}e^{r_1t}&0\\0&e^{r_2t}\end{pmatrix}\begin{pmatrix}c_1\\c_2\end{pmatrix}\qquad(107\text{--}108)$$

where $r_{1,2}$ are the diagonal terms of the matrix $BAB^{-1}$. The solution for $\mathbf{y}$ then follows from the definition of $\mathbf{x}$:

$$\begin{pmatrix}y_1(t)\\y_2(t)\end{pmatrix}=B^{-1}\begin{pmatrix}e^{r_1t}&0\\0&e^{r_2t}\end{pmatrix}\begin{pmatrix}c_1\\c_2\end{pmatrix}.\qquad(109)$$

From linear algebra, we know that $B^{-1}$ is the matrix of eigenvectors of $A$ and that the diagonal terms of $BAB^{-1}$ are the corresponding eigenvalues. The eigenvalues are given by the characteristic equation of $A$:

$$\begin{vmatrix}a_{11}-r&a_{12}\\a_{21}&a_{22}-r\end{vmatrix}=0\qquad(110)$$
$$(a_{11}-r)(a_{22}-r)-a_{12}a_{21}=0\qquad(111)$$
$$r^2-r\,\mathrm{Tr}(A)+|A|=0,\qquad(112)$$

where $\mathrm{Tr}(A)$ is the trace of $A$. The only crux is that we need the roots to be distinct; otherwise, $B^{-1}$ is not always invertible. Distinct roots imply that $B^{-1}$ is invertible. (If $A$ is symmetric, $B^{-1}$ is also invertible.)

Let us draw a phase diagram with the eigenvectors of $A$. The dynamic system behaves as the diagonal one, but the eigenvectors have replaced the standard, orthogonal axes.

[Figure: phase diagram in the $(y_1,y_2)$ plane with the eigenvectors as axes, one stable root.]
What remains is to find a particular solution of the complete system. One way here is to use the method of undetermined coefficients. Sometimes, we can find a steady state of the system, i.e., a point where the time derivatives are all zero. This is easy if the exogenous part is constant. We then set the differential equal to zero, so

$$\mathbf{0}=A\,\mathbf{y}(t)+\mathbf{p}\qquad(113)$$
$$\mathbf{y}_p(t)=\mathbf{y}_{ss}=-A^{-1}\mathbf{p}.\qquad(114)$$

Given that we know the value of $\mathbf{y}(0)$, we can now give the general solution. The formula is given in matrix form and is valid for any dimension of the system. First define the diagonal matrix

$$X(t)=\begin{pmatrix}e^{r_1t}&0&\cdots&0\\0&e^{r_2t}&\cdots&0\\\vdots&&\ddots&\vdots\\0&0&\cdots&e^{r_nt}\end{pmatrix}.\qquad(115)$$

Then we have

$$\mathbf{y}(t)=B^{-1}\mathbf{x}(t)+\mathbf{y}_{ss}=B^{-1}X(t)\,\mathbf{c}+\mathbf{y}_{ss}\qquad(116)$$
$$\mathbf{y}(0)=B^{-1}\mathbf{x}(0)+\mathbf{y}_{ss}=B^{-1}\mathbf{c}+\mathbf{y}_{ss}\qquad(117)$$
$$\mathbf{c}=B\left(\mathbf{y}(0)-\mathbf{y}_{ss}\right)\qquad(118)$$
$$\mathbf{y}(t)=B^{-1}X(t)\,B\left(\mathbf{y}(0)-\mathbf{y}_{ss}\right)+\mathbf{y}_{ss}.\qquad(119)$$
The method outlined above also works in the case of complex roots of the characteristic equation. If the roots are $a\pm bi$, we have

$$\mathbf{y}(t)=B^{-1}\begin{pmatrix}e^{(a+bi)t}&0\\0&e^{(a-bi)t}\end{pmatrix}\mathbf{c}+\mathbf{y}_{ss}.\qquad(120)$$

Example:

$$\begin{pmatrix}\dot y_1(t)\\\dot y_2(t)\end{pmatrix}=\begin{pmatrix}-1&-1\\1&-1\end{pmatrix}\begin{pmatrix}y_1(t)\\y_2(t)\end{pmatrix}+\begin{pmatrix}1\\1\end{pmatrix}\qquad(121)$$
$$r_{1,2}=-1\pm i\qquad(122)$$
$$B^{-1}=\begin{pmatrix}i&-i\\1&1\end{pmatrix}.\qquad(123)$$

So,

$$\mathbf{y}(t)=\qquad(124)$$
$$\begin{pmatrix}i&-i\\1&1\end{pmatrix}\begin{pmatrix}e^{-t}(\cos t+i\sin t)&0\\0&e^{-t}(\cos t-i\sin t)\end{pmatrix}\begin{pmatrix}c_1\\c_2\end{pmatrix}+\mathbf{y}_{ss}\qquad(125)$$
$$=e^{-t}\begin{pmatrix}i\left(c_1-c_2\right)\cos t-\left(c_1+c_2\right)\sin t\\\left(c_1+c_2\right)\cos t+i\left(c_1-c_2\right)\sin t\end{pmatrix}+\mathbf{y}_{ss}\qquad(126)$$
$$=e^{-t}\begin{pmatrix}\tilde c_1\cos t-\tilde c_2\sin t\\\tilde c_2\cos t+\tilde c_1\sin t\end{pmatrix}+\mathbf{y}_{ss}.\qquad(127)$$

The steady state is found from

$$\mathbf{0}=\begin{pmatrix}-1&-1\\1&-1\end{pmatrix}\begin{pmatrix}y_1(t)\\y_2(t)\end{pmatrix}+\begin{pmatrix}1\\1\end{pmatrix}\qquad(128)$$
$$\begin{pmatrix}y_1^{ss}\\y_2^{ss}\end{pmatrix}=-\begin{pmatrix}-1&-1\\1&-1\end{pmatrix}^{-1}\begin{pmatrix}1\\1\end{pmatrix}=\begin{pmatrix}0\\1\end{pmatrix}.\qquad(129)$$

If we know $\mathbf{y}(0)$, and using $\cos0=1$ and $\sin0=0$, we get

$$\begin{pmatrix}y_1(0)\\y_2(0)\end{pmatrix}=\begin{pmatrix}\tilde c_1\\\tilde c_2\end{pmatrix}+\begin{pmatrix}0\\1\end{pmatrix}\qquad(130)$$
$$\mathbf{y}(t)=e^{-t}\begin{pmatrix}y_1(0)\cos t-\left(y_2(0)-1\right)\sin t\\\left(y_2(0)-1\right)\cos t+y_1(0)\sin t\end{pmatrix}+\begin{pmatrix}0\\1\end{pmatrix}.\qquad(131)$$

A phase diagram of this system is an inward spiral.

[Figure: the inward spiral of the system in the $(y_1,y_2)$ plane, converging to the steady state $(0,1)$.]

How would it look if the real part of the roots were $0$ or $+1$?
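The example (121)-(131) can be simulated directly; the sign pattern of the coefficient matrix below is reconstructed to be consistent with the roots $-1\pm i$ and the steady state $(0,1)$ given above, and the initial condition $\mathbf{y}(0)=(1,2)$ is an illustrative assumption:

```python
import math

A = [[-1.0, -1.0], [1.0, -1.0]]
p = [1.0, 1.0]
y = [1.0, 2.0]                       # y(0)
dt, T = 1e-4, 3.0
for _ in range(int(T / dt)):         # Euler-integrate dy/dt = A y + p
    dy0 = A[0][0] * y[0] + A[0][1] * y[1] + p[0]
    dy1 = A[1][0] * y[0] + A[1][1] * y[1] + p[1]
    y = [y[0] + dy0 * dt, y[1] + dy1 * dt]
t = T
closed = [  # the closed form (131) with y1(0) = 1, y2(0) = 2
    math.exp(-t) * (math.cos(t) - math.sin(t)),
    math.exp(-t) * (math.cos(t) + math.sin(t)) + 1.0,
]
print(y, closed)  # both spiral in towards the steady state (0, 1)
```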
3.4.1 Equivalent Systems

In the repeated-root case, the matrix of eigenvectors may be singular, so that we cannot find $B^{-1}$. Then we use the method of equivalent systems, described in this section.

A linear $n$th-order differential equation is equivalent to a system of $n$ first-order differential equations. Consider

$$\dddot y(t)+a_1\ddot y(t)+a_2\dot y(t)+a_3y(t)=F(t).\qquad(133)$$

We can transform this into a system of first-order differential equations by defining

$$\dot y(t)=x_1(t),\qquad(134)$$
$$\ddot y(t)=x_2(t)=\dot x_1(t),\qquad(135)$$
$$\dddot y(t)=\dot x_2(t),\qquad(136)$$

giving

$$\begin{pmatrix}\dot y(t)\\\dot x_1(t)\\\dot x_2(t)\end{pmatrix}=\begin{pmatrix}0&1&0\\0&0&1\\-a_3&-a_2&-a_1\end{pmatrix}\begin{pmatrix}y(t)\\x_1(t)\\x_2(t)\end{pmatrix}+\begin{pmatrix}0\\0\\F(t)\end{pmatrix}.\qquad(137)$$
Since the equations are equivalent, they consequently have the same solutions. Sometimes one of the transformations is more convenient to solve. Let us also transform a two-dimensional system into a second-order equation:

$$\begin{pmatrix}\dot x_1(t)\\\dot x_2(t)\end{pmatrix}=\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}\begin{pmatrix}x_1(t)\\x_2(t)\end{pmatrix}+\begin{pmatrix}b_1\\b_2\end{pmatrix}.\qquad(138)$$

First use the first equation to express $x_2$, and then take the time derivative of the first equation. Then we can eliminate $x_2$ and its time derivative:

$$\dot x_1(t)=a_{11}x_1(t)+a_{12}x_2(t)+b_1\qquad(139)$$
$$x_2(t)=\frac{\dot x_1(t)-a_{11}x_1(t)-b_1}{a_{12}}\qquad(140)$$
$$\dot x_2(t)=\frac{\ddot x_1(t)-a_{11}\dot x_1(t)}{a_{12}}\qquad(141)$$
$$\frac{\ddot x_1(t)-a_{11}\dot x_1(t)}{a_{12}}=a_{21}x_1(t)+a_{22}\frac{\dot x_1(t)-a_{11}x_1(t)-b_1}{a_{12}}+b_2\qquad(142)$$
$$\ddot x_1(t)-\left(a_{11}+a_{22}\right)\dot x_1(t)+\left(a_{22}a_{11}-a_{12}a_{21}\right)x_1(t)=-a_{22}b_1+a_{12}b_2.\qquad(143)$$

Note that the characteristic equation of this second-order equation is the same as the one for the system in (138). Consequently, the roots, and thus the dynamics, are identical.
3.5 Non-linear systems and Linearization

Phase diagrams are convenient for qualitatively analyzing the behavior of a two-dimensional system that we cannot, or prefer not to, solve explicitly. E.g., if

$$\dot c(t)=g_1(c(t),k(t))\qquad(144)$$
$$\dot k(t)=g_2(c(t),k(t)).\qquad(145)$$

The first step here is to find the two curves in the $(c,k)$-space where $c$ and $k$, respectively, are constant. Setting the time derivatives equal to zero defines two relations between $c$ and $k$, which we denote by $G_1$ and $G_2$:

$$g_1(c(t),k(t))=0\ \Rightarrow\ c=G_1(k)\qquad(146)$$
$$g_2(c(t),k(t))=0\ \Rightarrow\ c=G_2(k).\qquad(147)$$

We then draw these curves in the $(c,k)$-space. For example, you will in the macro course analyze the Ramsey optimal consumption problem, where output is $f(k)$, the interest rate is $f'(k)$, the subjective discount rate is $\theta$, and utility is $u(c)$, where $u$ and $f$ are assumed to be concave functions. The model will deliver the following system of differential equations:

$$\dot c(t)=-\frac{u'(c(t))}{u''(c(t))}\left(f'(k(t))-\theta\right)\qquad(148)$$
$$\dot k(t)=f(k(t))-c(t).\qquad(149)$$

Setting the time derivatives to zero, we get

$$f'(k(t))=\theta,\qquad(150)$$
$$c=f(k(t)).\qquad(151)$$

Draw these curves in the $(c,k)$ space.

[Figure: the loci $\dot k(t)=0$ and $\dot c(t)=0$ in the $(c,k)$ plane.]

We then have to find the signs of the time derivatives above and below their respective zero-motion curves. From (148), we see that

$$\frac{\partial\dot c(t)}{\partial k}=-\frac{u'(c)}{u''(c)}f''(k)<0\qquad(152)$$
$$\frac{\partial\dot k(t)}{\partial c}=-1.\qquad(153)$$

This means that $\dot c$ is positive to the left of, and negative to the right of, the curve $\dot c=0$. For $\dot k$, we find that it is positive below and negative above $\dot k=0$. Then draw these motions as arrows in the phase diagram. Note that no paths can ever cross.

[Figure: the phase diagram with arrows indicating the direction of motion.]
We conclude that this system has saddle-point characteristics and thus has only one stable trajectory towards the steady state.

The behavior close to the steady state should also be evaluated by means of linearization around the steady state. We do that by approximating in the following way:

$$\begin{pmatrix}\dot c(t)\\\dot k(t)\end{pmatrix}\approx\begin{pmatrix}\frac{\partial g_1(c_{ss},k_{ss})}{\partial c}&\frac{\partial g_1(c_{ss},k_{ss})}{\partial k}\\\frac{\partial g_2(c_{ss},k_{ss})}{\partial c}&\frac{\partial g_2(c_{ss},k_{ss})}{\partial k}\end{pmatrix}\begin{pmatrix}c(t)-c_{ss}\\k(t)-k_{ss}\end{pmatrix},\qquad(154)$$

with an obvious generalization to higher dimensions.

We now evaluate the roots of the matrix of derivatives. In the example, we find that the coefficient matrix is

$$\begin{pmatrix}\frac{\partial g_1}{\partial c}&\frac{\partial g_1}{\partial k}\\\frac{\partial g_2}{\partial c}&\frac{\partial g_2}{\partial k}\end{pmatrix}\qquad(155)$$
$$=\begin{pmatrix}-\left(f'-\theta\right)\frac{\partial}{\partial c}\left(\frac{u'}{u''}\right)&-\frac{u'}{u''}f''\\-1&f'\end{pmatrix},\qquad(156)$$

where all functions are evaluated at the steady state. There, $f'=\theta$, implying that the matrix simplifies to

$$\begin{pmatrix}0&-\frac{u'}{u''}f''\\-1&f'\end{pmatrix}\qquad(157)$$

with eigenvalues

$$\frac{1}{2}\left(f'\pm\sqrt{(f')^2+4\frac{u'}{u''}f''}\right),\qquad(158)$$

which clearly are of opposite signs.
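The opposite signs in (158) can be checked in a parametric example. The functional forms below, $u(c)=\ln c$ (so $u'/u''=-c$) and $f(k)=k^{\alpha}$ with $\theta=0.05$, are illustrative assumptions, not taken from the text:

```python
import numpy as np

alpha, theta = 0.3, 0.05
k_ss = (alpha / theta) ** (1 / (1 - alpha))      # from f'(k) = theta, eq. (150)
c_ss = k_ss ** alpha                             # from c = f(k), eq. (151)
f2 = alpha * (alpha - 1) * k_ss ** (alpha - 2)   # f''(k_ss) < 0
J = np.array([[0.0, c_ss * f2],                  # -(u'/u'') f'' = c * f'' here
              [-1.0, theta]])
print(np.linalg.eigvals(J))  # one negative, one positive: a saddle point
```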
3.6 Example: Steady-state asset distributions

In this example, we use our derived skills to find the steady-state wealth distribution in a simple model. As we will see, here the model gives us a system of differential equations for the wealth distribution that we can solve easily. We will see that the methods we have learned work just as well when the differential equations are in wealth, rather than time.

In (Hassler and Mora, JPub, 1999), we analyze preferences over unemployment insurance in a very simple continuous-time economy where agents can borrow and save at rate $r$. In the model, employed individuals earn a wage $w$ per unit of time and become unemployed with an instantaneous probability $\gamma$. This means that over a small (infinitesimal) interval of time $dt$, the probability of becoming unemployed is $\gamma\,dt$. Unemployed individuals receive a flow of unemployment benefits $b$ and become rehired with instantaneous probability $h\,dt$. In addition, we assume that there is an instantaneous death probability of $\delta$ and that there is an inflow of newborn unemployed with zero assets of $\delta$, so that the total population is constant.

In the paper, we show that if individuals have CARA utility ($U=-e^{-\alpha c}$) and wages and benefits are constant, individuals choose constant savings amounts for each of the two employment states. Denoting these $s_e$ and $s_u$, where $s_e>0$ and $s_u<0$, we have that individual asset accumulation for the two types, conditional on surviving, is

$$\dot A_t=s_e\qquad(159)$$

for employed and

$$\dot A_t=s_u\qquad(160)$$

for unemployed.

In the paper, we don't calculate the steady-state wealth distribution of assets. That's the purpose of this exercise. First, we calculate the steady-state share of unemployed, $\mu_u$. For this purpose, we note that in steady state, the inflow to and the outflow from the stock of unemployed must be equal. Thus,

$$\gamma\left(1-\mu_u\right)+\delta=\mu_u\left(h+\delta\right)\qquad(161)$$
$$\mu_u=\frac{\gamma+\delta}{\gamma+h+\delta}.\qquad(162)$$
Now, we want to calculate the steady-state distribution of assets among employed and unemployed in this economy. Let us denote these densities by $f_e(A)$ and $f_u(A)$. We will derive these by solving a system of differential equations. Consider first the number (density) of employed individuals with assets $A\neq0$, given by $f_e(A)$. Over a small period $dt$, a number $f_e(A)\left(1-(\delta+\gamma)\,dt\right)$ of them remain employed and alive. Over the same time, a number $f_u(A)\,h\,dt$ of unemployed with assets $A$ become hired. Finally, over the period $dt$, these individuals add to their assets an amount $s_e\,dt$. Writing this down yields

$$f_e(A+s_e\,dt)=f_e(A)\left(1-(\delta+\gamma)\,dt\right)+f_u(A)\,h\,dt\qquad(163)$$
$$f_e(A)+f_e'(A)\,s_e\,dt=f_e(A)\left(1-(\delta+\gamma)\,dt\right)+f_u(A)\,h\,dt\qquad(164)$$
$$f_e'(A)=-f_e(A)\,\frac{\delta+\gamma}{s_e}+f_u(A)\,\frac{h}{s_e}.\qquad(165)$$

By the same reasoning,

$$f_u(A+s_u\,dt)=f_u(A)\left(1-(\delta+h)\,dt\right)+f_e(A)\,\gamma\,dt\qquad(166)$$
$$f_u(A)+f_u'(A)\,s_u\,dt=f_u(A)\left(1-(\delta+h)\,dt\right)+f_e(A)\,\gamma\,dt\qquad(167)$$
$$f_u'(A)=-f_u(A)\,\frac{\delta+h}{s_u}+f_e(A)\,\frac{\gamma}{s_u},\qquad(168)$$

yielding the system

$$\begin{pmatrix}f_e'(A)\\f_u'(A)\end{pmatrix}=\begin{pmatrix}-\frac{\delta+\gamma}{s_e}&\frac{h}{s_e}\\\frac{\gamma}{s_u}&-\frac{\delta+h}{s_u}\end{pmatrix}\begin{pmatrix}f_e(A)\\f_u(A)\end{pmatrix}\qquad(169)$$
$$\equiv M\begin{pmatrix}f_e(A)\\f_u(A)\end{pmatrix}.\qquad(170)$$

Let us now assign some numbers to the parameters. Say $\delta=1/40$, $h=2$, $\gamma=1/5$, $s_e=1$ and $s_u=-2$, all measured as probabilities per year. Then we can calculate the roots and the eigenvectors:

$$r_1=\frac{63}{160}+\frac{1}{160}\sqrt{4681}\approx0.821\qquad(171)$$
$$r_2=\frac{63}{160}-\frac{1}{160}\sqrt{4681}\approx-0.0339\qquad(172)$$
$$B^{-1}=\begin{pmatrix}1&1\\\frac{99}{320}+\frac{1}{320}\sqrt{4681}&\frac{99}{320}-\frac{1}{320}\sqrt{4681}\end{pmatrix}\approx\begin{pmatrix}1.0&1.0\\0.523&0.0956\end{pmatrix}.\qquad(173)$$
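The numbers in (171)-(173) are easy to reproduce with numpy:

```python
import numpy as np

delta, h, gamma, s_e, s_u = 1/40, 2.0, 1/5, 1.0, -2.0
M = np.array([[-(delta + gamma) / s_e, h / s_e],
              [gamma / s_u, -(delta + h) / s_u]])   # the matrix in (169)
vals, vecs = np.linalg.eig(M)
print(vals)            # approximately [0.821, -0.0339]
print(vecs / vecs[0])  # columns rescaled to (1, 0.523) and (1, 0.0956)
```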
Thus, except at $A=0$, where there is an inflow of newborns that we have not considered, we have

$$\begin{pmatrix}f_e(A)\\f_u(A)\end{pmatrix}=B^{-1}X(A)\,\mathbf{c}\qquad(174)$$
$$=\begin{pmatrix}1.0&1.0\\0.523&0.0956\end{pmatrix}\begin{pmatrix}e^{0.821A}&0\\0&e^{-0.0339A}\end{pmatrix}\begin{pmatrix}c_1\\c_2\end{pmatrix}\qquad(175)$$
$$=\begin{pmatrix}c_1e^{0.821A}+c_2e^{-0.0339A}\\0.523\,c_1e^{0.821A}+0.0956\,c_2e^{-0.0339A}\end{pmatrix}.\qquad(176)$$

Now, we first note that $f_e(A)$ and $f_u(A)$ cannot be explosive in either direction. That would violate the fact that these functions are densities, i.e., the sum of their respective integrals over the real line must be unity. Thus, for $A<0$, $c_2$ must be zero, and for $A>0$, $c_1=0$. Furthermore, we know

$$\int_{-\infty}^{\infty}f_e(A)\,dA=1-\mu_u=\frac{h}{\gamma+h+\delta}\approx0.899\qquad(177)$$
$$\int_{-\infty}^{\infty}f_u(A)\,dA=\mu_u=\frac{\gamma+\delta}{\gamma+h+\delta}\approx0.101.\qquad(178)$$

Using this, we can calculate the integration constants:

$$\int_{-\infty}^{\infty}f_e(A)\,dA=\int_{-\infty}^{0}c_1e^{0.821A}\,dA+\int_{0}^{\infty}c_2e^{-0.0339A}\,dA\qquad(179)$$
$$=\frac{c_1}{0.821}+\frac{c_2}{0.0339}=0.899\qquad(180)$$
$$\int_{-\infty}^{\infty}f_u(A)\,dA=\int_{-\infty}^{0}0.523\,c_1e^{0.821A}\,dA+\int_{0}^{\infty}0.0956\,c_2e^{-0.0339A}\,dA\qquad(181)$$
$$=\frac{0.523\,c_1}{0.821}+\frac{0.0956\,c_2}{0.0339}=0.101,\qquad(182)$$

yielding

$$c_1=0.0289,\qquad(183)$$
$$c_2=0.0293.\qquad(184)$$

This concludes our calculations:

$$f_e(A)=\begin{cases}0.0289\,e^{0.821A}&\text{for }A<0\\0.0293\,e^{-0.0339A}&\text{for }A>0\end{cases}\qquad(185)$$
$$f_u(A)=\begin{cases}0.0151\,e^{0.821A}&\text{for }A<0\\0.00280\,e^{-0.0339A}&\text{for }A>0.\end{cases}\qquad(186)$$
[Figure: the two steady-state densities over assets $A$, plotted separately for $A<0$ and $A>0$.]

Plotting the densities, using a solid (dotted) line to denote wealth of employed (unemployed), we see that the unemployed are more concentrated to the left. We can also calculate average assets among the two types from

$$\bar A_e=\int_{-\infty}^{\infty}A\,f_e(A)\,dA=\qquad(187)$$
$$\int_{-\infty}^{0}0.0289\,A\,e^{0.821A}\,dA+\int_{0}^{\infty}0.0293\,A\,e^{-0.0339A}\,dA\approx25.45\qquad(188)$$
$$\bar A_u=\int_{-\infty}^{\infty}A\,f_u(A)\,dA=\qquad(189)$$
$$\int_{-\infty}^{0}0.0151\,A\,e^{0.821A}\,dA+\int_{0}^{\infty}0.00280\,A\,e^{-0.0339A}\,dA\approx2.41\qquad(190)$$
4 Difference equations

4.1 Sums, forward and backward solutions

4.1.1 Sums vs. Integrals

Difference equations can be solved in ways very similar to how we solve differential equations. First, we will look at the analogy to integrals. Consider the difference equation

$$y_t-y_{t-1}\equiv\Delta y_t=q_t,\qquad(1)$$

where $\Delta y_t$ is the change in $y$ per unit (interval) of time. We sum both sides from some date $t_0$ until $t$ to get

$$\sum_{s=t_0}^{t}\Delta y_s=y_t-y_{t_0-1}=\sum_{s=t_0}^{t}q_s.\qquad(2)$$

We can then write the solution as

$$\Delta y_t=q_t\qquad(3)$$
$$\Rightarrow\ y_t=\sum_{s=t_0}^{t}q_s+y_{t_0-1}.\qquad(4)$$

As we see, this is very much like integrals:

$$\frac{dy(t)}{dt}=q(t)\qquad(5)$$
$$y(t)=\int_{t_0}^{t}q(s)\,ds+y_{t_0},\qquad(6)$$

and the relation between $y_t$ and $\sum_{s=t_0}^{t}q_s$ is the same as between $q(t)$ and its primitive, since

$$\Delta\sum_{s=t_0}^{t}q_s=q_t.\qquad(7)$$
4.1.2 Forward and backward solutions

The part $\sum_{s=t_0}^{t}q_s$ of the RHS of (4) is exogenous, i.e., independent of $y_t$. Sometimes it converges in one or both directions. Then we can write the solution in another way. Suppose the following limit exists:

$$\lim_{T\to-\infty}\sum_{s=T}^{t}q_s=\sum_{s=-\infty}^{t}q_s.\qquad(8)$$

Then the other part of the RHS of (4) should also have a well-defined limit,

$$\lim_{T\to-\infty}y_T=\bar y_{-\infty},\qquad(9)$$

so that the solution is

$$y_t=\sum_{s=-\infty}^{t}q_s+\bar y_{-\infty}.\qquad(10)$$

Clearly, (10) solves (1):

$$\Delta y_t=\sum_{s=-\infty}^{t}q_s-\sum_{s=-\infty}^{t-1}q_s=q_t.\qquad(11)$$

If the solution in (10) exists, it is called the backward solution.

Analogously, the limit

$$\lim_{T\to\infty}\sum_{s=t}^{T}q_s=\sum_{s=t}^{\infty}q_s\qquad(12)$$

might exist, in which case

$$\lim_{T\to\infty}y_T=\bar y_{\infty}\qquad(13)$$

also exists. Then, we have

$$\lim_{T\to\infty}\left(y_T-y_t\right)=\sum_{s=t+1}^{\infty}q_s\qquad(14)$$
$$\Rightarrow\ y_t=\bar y_{\infty}-\sum_{s=t+1}^{\infty}q_s,\qquad(15)$$

which is called the forward solution.

Example. Suppose $q_t$ follows a simple AR(1) process:

$$\Delta y_t=q_t,\qquad(16)$$
$$q_t=\rho\,q_{t-1}.\qquad(17)$$

If $|\rho|<1$, we can use the forward solution, and if $|\rho|>1$, the backward solution works. In the former case,

$$\sum_{s=t+1}^{\infty}q_s=\sum_{s=0}^{\infty}q_{t+1}\rho^s=\frac{q_{t+1}}{1-\rho}\qquad(18)$$
$$\Rightarrow\ y_t=\bar y_{\infty}-\frac{q_{t+1}}{1-\rho},\qquad(19)$$

where $\bar y_{\infty}$ is determined from, for example, an initial condition. In the latter case,

$$\sum_{s=-\infty}^{t}q_s=\sum_{s=0}^{\infty}q_t\rho^{-s}=\frac{q_t\,\rho}{\rho-1}\qquad(20)$$
$$\Rightarrow\ y_t=\frac{q_t\,\rho}{\rho-1}+\bar y_{-\infty}.\qquad(21)$$
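The forward solution (19) can be verified by checking that its first difference equals $q_{t+1}$; the values $\rho=0.5$, $q_0=1$ and $\bar y_{\infty}=0$ below are illustrative assumptions:

```python
rho, q0, ybar = 0.5, 1.0, 0.0
q = lambda t: q0 * rho**t
y = lambda t: ybar - q(t + 1) / (1 - rho)   # the forward solution (19)
for t in range(5):
    print(y(t + 1) - y(t), q(t + 1))        # Delta y_{t+1} equals q_{t+1}
```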
4.1.3 First order difference equations with constant coefficients

A first-order difference equation with constant coefficients has the following form:

$$x_t-ax_{t-1}=c.\qquad(22)$$

As we see, the LHS is not a pure difference, as in (1), so we cannot simply sum over $t$. Instead, we rely on the following result.

Result 11 The general solution (the complete set of solutions) to a difference equation is the general solution to the homogeneous part plus any particular solution to the complete equation.

The general solution to the homogeneous first-order difference equation with coefficient $a$ can be written

$$x_t^h=b\,a^t,\qquad(23)$$

where $b$ is an arbitrary constant.

A particular solution is sometimes the steady state, which exists for (22) if $a\neq1$:

$$x_{ss}-ax_{ss}=c\qquad(24)$$
$$x_{ss}=\frac{c}{1-a}.\qquad(25)$$
4.1.4 Non-constant RHS

Now look at

$$x_t-ax_{t-1}=q_t.\qquad(26)$$

A way to solve (26) is to use $a^{-t}$ as the analogue of the integrating factor. Multiplying both sides by $a^{-t}$ yields

$$a^{-t}\left(x_t-ax_{t-1}\right)=a^{-t}q_t.\qquad(27)$$

Now, we see that the LHS can be written $\Delta\left(a^{-t}x_t\right)$, implying

$$\Delta\left(a^{-t}x_t\right)=a^{-t}q_t\qquad(28)$$
$$\Rightarrow\ a^{-t}x_t=\sum_{s=t_0}^{t}a^{-s}q_s+b\qquad(29\text{--}30)$$
$$x_t=b\,a^t+\sum_{s=t_0}^{t}a^{t-s}q_s.\qquad(31)$$

Again, we should verify that this satisfies our original difference equation.
Again, we should verify that this satises our original dierence equation.
4.1.5 Stable growth the Solow growth model
Often in macro, the variables in the model grow in a way that precludes the
existence of a steady state. However, some transformation of the variables
might possess a steady state. The simplest example of this is the Solow
growth model We will rst solve the model in trending variables and then
do it in transformed (detrended) variables. As we know, the savings rate is
exogenous and denoted o and the labor supply `
t
follows
`
t
= c
j
`
t1
- (1 +q) `
t1
. (32)
There is one good, used for consumption and as capital, which follows
the law-of-motion
1
t+1
= o, (1
t
. `
t
) . (33)
where , (1
t
. `
t
) is a concave production function. Let us specify production
as the CRS Cobb-Douglas function,
, (1
t
. `
t
) = `
1c
t
1
c
t
. (34)
1 c 0. (35)
43
Letting lower case letters denote natural logarithms, we have
/
t+1
= : + (1 c) :
t
+c/
t
. (36)
/
t+1
c/
t
= : + (1 c) :
t
. (37)
and
:
t
= q (38)
:
t
=
t
0
q +: = tq +:. (39)
for some constant :.
Here, we can guess on a particular solution of the same form as the RHS,
i.e.,
/
t
= / +tq
I
(40)
/ + (t + 1) q
I
c(/ +tq
I
) = : + (1 c) :
t
(41)
/ (1 c) +t (1 c) q
I
+q
I
= : + (1 c) : +t (1 c) q (42)
For this to hold for all t, we need
q
I
= q. (43)
/ =
: q
1 c
+:. (44)
giving a particular solution
/
j,t
=
: q
1 c
+: +tq. (45)
This solution is in economics often called a balanced growth path, in which
all variables grow at the same rate. The complete solution is
/
t
= c
t
+
: q
1 c
+: +tq. (46)
/
t
=
_
/
0
_
: q
1 c
+:
__
c
t
+
: q
1 c
+: +tq. (47)
In this case, a convenient alternative is to define a new variable, capital per capita, $C_t=K_t/N_t\ \Rightarrow\ c_t=k_t-n_t$. Using this, and dividing (33) by $N_t$, we get

$$\frac{K_{t+1}}{N_t}=\frac{\sigma f(K_t,N_t)}{N_t}\qquad(48)$$
$$\frac{K_{t+1}}{N_{t+1}}\,\frac{N_{t+1}}{N_t}=\frac{\sigma N_t^{1-\alpha}K_t^{\alpha}}{N_t}\ \Rightarrow\qquad(49)$$
$$C_{t+1}\,e^{g}=\sigma\left(\frac{K_t}{N_t}\right)^{\alpha}\qquad(50)$$
$$c_{t+1}+g=s+\alpha c_t\qquad(51)$$
$$c_{t+1}-\alpha c_t=s-g\qquad(52)$$
$$c_{ss}=\frac{s-g}{1-\alpha}\qquad(53)$$
$$c_t=\left(c_0-\frac{s-g}{1-\alpha}\right)\alpha^t+\frac{s-g}{1-\alpha},\qquad(54)$$

coinciding with the solution in (46), but now expressed as a steady state in capital per capita rather than a balanced growth path for capital.
We can also look at the log of output per capita,

$$y_t=\alpha c_t\qquad(55)$$
$$y_t=\left(y_0-\alpha\frac{s-g}{1-\alpha}\right)\alpha^t+\alpha\frac{s-g}{1-\alpha}.\qquad(56)$$

This is the basis for the so-called growth regressions, pioneered by Barro:

$$y_t-y_0=-y_0\left(1-\alpha^t\right)+\alpha\frac{s-g}{1-\alpha}\left(1-\alpha^t\right),\qquad(57)$$

where growth over a sample period 0 through $t$ is seen to depend negatively on initial output and positively on savings. Also,

$$y_t-y_{t-1}=\left(y_0-\alpha\frac{s-g}{1-\alpha}\right)\alpha^t+\alpha\frac{s-g}{1-\alpha}\qquad(58)$$
$$-\left(y_0-\alpha\frac{s-g}{1-\alpha}\right)\alpha^{t-1}-\alpha\frac{s-g}{1-\alpha}\qquad(59)$$
$$=-\left(y_0-\alpha\frac{s-g}{1-\alpha}\right)\alpha^{t-1}(1-\alpha)\qquad(60)$$
$$=\left(\alpha\frac{s-g}{1-\alpha}-y_{t-1}\right)(1-\alpha).\qquad(61)$$

As we see, the growth rate (the log difference) of output per capita is a fraction $(1-\alpha)$ of the difference between steady-state and current output per capita. Note that convergence is slower the larger is capital's share of output. In the limit when $\alpha\to1$, the model shows no convergence and has become an endogenous growth model, with

$$\Delta k_t=s,\qquad(62)$$
$$\Delta c_t=\Delta y_t=s-g.\qquad(63)$$
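Convergence to the steady state (53) is quick to simulate; the parameter values $\sigma=0.2$, $g=0.02$, $\alpha=0.3$ below are illustrative assumptions:

```python
import math

sigma, g, alpha = 0.2, 0.02, 0.3
s = math.log(sigma)                  # lower-case letters are logs
c_ss = (s - g) / (1 - alpha)         # the steady state (53)
c = 0.0                              # start away from the steady state
for t in range(60):
    c = s - g + alpha * c            # iterate (52): c_{t+1} = s - g + alpha*c_t
print(c, c_ss)                       # both approximately -2.328
```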
4.2 Linear difference equations of higher order

4.2.1 Higher order homogeneous difference equations with constant coefficients

Consider the homogeneous difference equation

$$x_{t+n}+a_1x_{t+n-1}+\dots+a_nx_t=0.\qquad(64)$$

Definition 4 The forward operator $E$ is defined by

$$E^sx_t=x_{t+s},\qquad(65)$$

where $s$ is any integer, positive or negative.

We can then write (64) in a condensed polynomial form:

$$P(E)\,x_t=0.\qquad(66)$$

We then have to find the roots of the equation

$$P(r)=r^n+a_1r^{n-1}+\dots+a_n=0.\qquad(67)$$

Each root contributes to the general solution with one term that is independent of the others, exactly as with differential equations.

Result 12 Let $r_s$ denote the roots of the polynomial $P(r)$, i.e., all solutions to $P(r)=0$. Let the first $k\ge0$ roots be distinct and the remaining $l=n-k$ roots repeated. Then, the general solution to (64) is

$$x_t=c_1r_1^t+\dots+c_kr_k^t+r_{k+1}^t\left(c_{k+1}+tc_{k+2}+\dots+t^{l-1}c_{k+l}\right).\qquad(68)$$

If there is more than one set of repeated roots, each set of size $m\ge2$ contributes with $m$ linearly independent terms

$$r_{k+1}^t\left(c_{k+1}+tc_{k+2}+\dots+t^{m-1}c_{k+m}\right).\qquad(69)$$
In the case of complex roots, express the complex number in polar form [3]:

$$r=a+bi=|r|\left(\cos\theta+i\sin\theta\right),\qquad(70)$$
$$|r|=\sqrt{a^2+b^2},\qquad(71)$$
$$\theta=\tan^{-1}\frac{b}{a}.\qquad(72)$$

We then use the fact that

$$e^{i\theta}=\cos\theta+i\sin\theta\qquad(73)$$
$$r=|r|\,e^{i\theta}\ \Rightarrow\ r^t=|r|^te^{i\theta t}=|r|^t\left(\cos t\theta+i\sin t\theta\right)\qquad(74)$$

to get, for the complex conjugates $r_{1,2}=a\pm bi$,

$$c_1r_1^t+c_2r_2^t=|r|^t\left(\tilde c_1\cos t\theta+\tilde c_2\sin t\theta\right),\qquad(75)$$

which we can see is a generalization of (68). Complex roots thus give us oscillating solutions.

[3] Note that the formula $\theta=\tan^{-1}(b/a)$ is valid only for $\theta\in(-\pi/2,\pi/2]$. For example, if $r_{1,2}=-\frac12\pm\frac12\sqrt3\,i$, we note that $-\frac12+\frac12\sqrt3\,i=e^{i\,2\pi/3}$ and $-\frac12-\frac12\sqrt3\,i=e^{-i\,2\pi/3}$.

4.2.2 Stability

From the solutions (68) and (75), it is clear that roots such that $|r|<1$ give converging terms.

4.2.3 Higher order non-homogeneous difference equations with constant coefficients

Result 13 The general solution to a non-homogeneous difference equation with constant coefficients is given by the general solution to the homogeneous part plus any solution to the full equation.

To solve non-homogeneous equations, we thus have to find a particular solution to the complete equation to add to the general solution of the homogeneous part. The simplest non-homogeneous difference equation is

$$P(E)\,x_t=c.\qquad(76)$$

Here, we try a steady state:

$$P(E)\,x_{ss}=P(1)\,x_{ss}=c\qquad(77)$$
$$x_{ss}=\frac{c}{P(1)},\qquad(78)$$

provided $P(1)\neq0$.

In the more general case, we often have to guess a particular solution.
4.3 Systems of linear first order difference equations

Systems of first-order difference equations are solved with the diagonalization method that we also used for differential equations:

$$\begin{pmatrix}x_{1,t+1}\\x_{2,t+1}\\\vdots\\x_{n,t+1}\end{pmatrix}=A\begin{pmatrix}x_{1,t}\\x_{2,t}\\\vdots\\x_{n,t}\end{pmatrix}+\mathbf{P}\qquad(79)$$
$$\mathbf{x}_{t+1}=A\mathbf{x}_t+\mathbf{P}.\qquad(80)$$

First, we find the diagonalizing matrix of eigenvectors $B^{-1}$. Then we solve the homogeneous equation by defining

$$\mathbf{y}_{t+1}=B\mathbf{x}_{t+1}\qquad(81)$$
$$=BA\mathbf{x}_t\qquad(82)$$
$$=BAB^{-1}\mathbf{y}_t\qquad(83)$$
$$\Rightarrow\ \mathbf{y}_t=\begin{pmatrix}r_1^t&0&\cdots&0\\0&r_2^t&\cdots&0\\\vdots&&\ddots&\vdots\\0&0&\cdots&r_n^t\end{pmatrix}\begin{pmatrix}c_1\\c_2\\\vdots\\c_n\end{pmatrix}\qquad(84)$$
$$=X_t\,\mathbf{c}.\qquad(85)$$

Then we transform back [4]:

$$\mathbf{x}_{t+1}=B^{-1}\mathbf{y}_{t+1}=B^{-1}X_t\,\mathbf{c}.\qquad(86)$$

[4] Since the vector of constants, $\mathbf{c}$, is arbitrary, we may equally well write $\mathbf{x}_t=B^{-1}X_t\,\mathbf{c}$.

A particular solution to the complete equation then has to be added. Here, we might try to find a steady state as a particular solution:

$$\mathbf{x}_{ss}=A\mathbf{x}_{ss}+\mathbf{P}\qquad(87)$$
$$\mathbf{x}_{ss}=(I-A)^{-1}\mathbf{P}.\qquad(88)$$

If we have the initial conditions, we find that

$$\mathbf{x}_{t+1}=B^{-1}X_t\,\mathbf{c}+\mathbf{x}_{ss}\qquad(89)$$
$$\mathbf{x}_1=B^{-1}X_0\,\mathbf{c}+\mathbf{x}_{ss}\qquad(90)$$
$$=B^{-1}\mathbf{c}+\mathbf{x}_{ss}\qquad(91)$$
$$\Rightarrow\ \mathbf{c}=B\left(\mathbf{x}_1-\mathbf{x}_{ss}\right)\qquad(92)$$
$$\mathbf{x}_{t+1}=B^{-1}X_t\,B\left(\mathbf{x}_1-\mathbf{x}_{ss}\right)+\mathbf{x}_{ss}.\qquad(93)$$
An example:

$$\mathbf{x}_{t+1}=\begin{pmatrix}0&1\\-1&0\end{pmatrix}\mathbf{x}_t+\begin{pmatrix}1\\1\end{pmatrix}.\qquad(94)$$
$$\mathbf{x}_{ss}=\left(\begin{pmatrix}1&0\\0&1\end{pmatrix}-\begin{pmatrix}0&1\\-1&0\end{pmatrix}\right)^{-1}\begin{pmatrix}1\\1\end{pmatrix}=\begin{pmatrix}1\\0\end{pmatrix}.\qquad(95)$$

The eigenvalues are

$$r_{1,2}=i,\,-i\ \Rightarrow\ r_{1,2}^t=1^t\left(\cos\left(\frac{\pi}{2}t\right)\pm i\sin\left(\frac{\pi}{2}t\right)\right)\qquad(96)$$
$$=\cos\left(\frac{\pi}{2}t\right)\pm i\sin\left(\frac{\pi}{2}t\right)\qquad(97)$$

and

$$B^{-1}=\begin{pmatrix}1&1\\i&-i\end{pmatrix}.\qquad(98)$$

The solution is

$$\mathbf{x}_t=\begin{pmatrix}1&1\\i&-i\end{pmatrix}\begin{pmatrix}i^t&0\\0&(-i)^t\end{pmatrix}\begin{pmatrix}c_1\\c_2\end{pmatrix}+\begin{pmatrix}1\\0\end{pmatrix}\qquad(99)$$
$$=\begin{pmatrix}i^t&(-i)^t\\i^{t+1}&(-i)^{t+1}\end{pmatrix}\begin{pmatrix}c_1\\c_2\end{pmatrix}+\begin{pmatrix}1\\0\end{pmatrix}\qquad(100)$$
$$=\begin{pmatrix}c_1i^t+c_2(-i)^t\\c_1i^{t+1}+c_2(-i)^{t+1}\end{pmatrix}+\begin{pmatrix}1\\0\end{pmatrix}\qquad(101)$$
$$=\begin{pmatrix}c_1\left(\cos\frac{\pi}{2}t+i\sin\frac{\pi}{2}t\right)+c_2\left(\cos\frac{\pi}{2}t-i\sin\frac{\pi}{2}t\right)\\c_1\left(\cos\frac{\pi}{2}(t+1)+i\sin\frac{\pi}{2}(t+1)\right)+c_2\left(\cos\frac{\pi}{2}(t+1)-i\sin\frac{\pi}{2}(t+1)\right)\end{pmatrix}\qquad(102)$$
$$+\begin{pmatrix}1\\0\end{pmatrix}\qquad(103)$$
$$=\begin{pmatrix}\left(c_1+c_2\right)\cos\frac{\pi}{2}t+\left(c_1-c_2\right)i\sin\frac{\pi}{2}t\\\left(c_1+c_2\right)\cos\frac{\pi}{2}(t+1)+\left(c_1-c_2\right)i\sin\frac{\pi}{2}(t+1)\end{pmatrix}+\begin{pmatrix}1\\0\end{pmatrix}\qquad(104)$$
$$=\begin{pmatrix}\tilde c_1\cos\frac{\pi}{2}t+\tilde c_2\sin\frac{\pi}{2}t\\\tilde c_1\cos\frac{\pi}{2}(t+1)+\tilde c_2\sin\frac{\pi}{2}(t+1)\end{pmatrix}+\begin{pmatrix}1\\0\end{pmatrix}.\qquad(105)$$
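With eigenvalues $\pm i$ the homogeneous part rotates with period 4, so any path cycles around the steady state $(1,0)$. Iterating (94) directly shows this; the initial condition $\mathbf{x}_1=(2,0)$ is an illustrative assumption:

```python
x = (2.0, 0.0)
for t in range(9):
    print(t + 1, x)                  # (2,0) -> (1,-1) -> (0,0) -> (1,1) -> (2,0) ...
    x = (x[1] + 1.0, -x[0] + 1.0)    # x_{t+1} = A x_t + P, with A, P from (94)
```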
4.3.1 Non-invertible eigenvectors

If some roots are repeated, the matrix of eigenvectors $B^{-1}$ may be non-invertible. In this case we cannot use the diagonalization method. Instead we can use the existence of a higher order single difference equation that is equivalent to the system of first order difference equations we want to solve. This is exactly analogous to the case of differential equations.

Look at the following example:
\[ \mathbf{x}_{t+1} = \begin{pmatrix} -1 & 1 \\ -4 & 3 \end{pmatrix} \mathbf{x}_t. \tag{106} \]
The eigenvalues of the coefficient matrix are both 1 and the matrix of eigenvectors is non-invertible. Now we transform the system into a second order single difference equation. From the first row we have that
\[ x_{1,t+1} = -x_{1,t} + x_{2,t} \tag{107} \]
\[ x_{2,t} = x_{1,t+1} + x_{1,t}, \tag{108} \]
\[ x_{2,t+1} = x_{1,t+2} + x_{1,t+1}. \tag{109} \]
Using the second row,
\[ x_{2,t+1} = -4x_{1,t} + 3x_{2,t} \tag{110} \]
\[ x_{1,t+2} + x_{1,t+1} = -4x_{1,t} + 3\left(x_{1,t+1} + x_{1,t}\right) \tag{111} \]
\[ 0 = x_{1,t+2} - 2x_{1,t+1} + x_{1,t}. \tag{112} \]
Note that the polynomial associated with this second order difference equation is identical to the characteristic equation of the coefficient matrix in (106). Consequently, they have the same (repeated) root 1. The solution to (112) is
\[ x_{1,t} = (c_1 + t c_2)\, 1^t = c_1 + t c_2. \tag{113} \]
With knowledge of $x_{1,0}$ and $x_{1,1}$, we have
\[ x_{1,t} = x_{1,0} + t\left(x_{1,1} - x_{1,0}\right). \tag{114} \]
By using (108), we can express the solution in terms of $x_{1,t}$ and $x_{2,t}$:
\[ x_{1,t} = x_{1,0} + t\left(x_{1,1} - x_{1,0}\right) \tag{115} \]
\[ x_{2,t} = x_{1,t+1} + x_{1,t} \tag{116} \]
\[ = 2x_{1,0} + (2t+1)\left(x_{1,1} - x_{1,0}\right) \tag{117} \]
\[ \begin{pmatrix} x_{1,t} \\ x_{2,t} \end{pmatrix} = \begin{pmatrix} 1 & t \\ 2 & 2t+1 \end{pmatrix} \begin{pmatrix} x_{1,0} \\ x_{1,1} - x_{1,0} \end{pmatrix}. \tag{118} \]
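The following sketch checks this solution numerically, using the coefficient matrix with the signs as reconstructed in (106):

```python
import numpy as np

# Repeated-root example: A has both eigenvalues equal to 1, so it
# cannot be diagonalized; formula (118) still solves the system.
A = np.array([[-1.0, 1.0], [-4.0, 3.0]])
print(np.linalg.eigvals(A))               # [1., 1.]

x0 = np.array([1.0, 3.0])                 # x_{1,0} = 1, x_{2,0} = 3
x1_0 = x0[0]
x1_1 = A[0] @ x0                          # x_{1,1} from the first row

x = x0.copy()
for t in range(5):
    closed = np.array([[1, t], [2, 2 * t + 1]]) @ np.array([x1_0, x1_1 - x1_0])
    print(t, x, closed)                   # direct iteration vs. formula (118)
    x = A @ x
```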
5 Dynamic Optimization in Discrete Time

5.1 Non-Stochastic Dynamic Programming

Consider the dynamic problem
\[ \max_{\{k_t, c_t\}_{t=1}^{T}} \sum_{t=1}^{T} u(k_t, c_t, t) \tag{1} \]
\[ \text{s.t. } k_1 = \bar k, \tag{2} \]
\[ k_{t+1} = f(k_t, c_t, t), \quad t = 1, \dots, T, \tag{3} \]
\[ k_{T+1} = \underline{k}. \tag{4} \]
Before trying to solve this, notice:

1. The per period payoff is additive over time.

2. $k_t$ cannot be changed in period $t$, but its future values can be changed, through its law-of-motion, by $c_t$. We will call $k$ a state variable (to be more properly defined later) and $c$ a control variable. A sequence $k_1, k_2, \dots, k_{T+1}$ is said to be admissible if and only if it satisfies the constraints (2)-(4) for some sequence $c_1, c_2, \dots, c_T$.
A direct way to solve this would be to form the Lagrangian
\[ L = \sum_{t=1}^{T} u(k_t, c_t, t) + \sum_{t=1}^{T} \lambda_t \left( f(k_t, c_t, t) - k_{t+1} \right) \tag{5} \]
\[ + \mu_1 \left( \bar k - k_1 \right) + \mu_{T+1} \left( k_{T+1} - \underline{k} \right) \tag{6} \]
with first order conditions
\[ u_k(k_t, c_t, t) + \lambda_t f_k(k_t, c_t, t) - \lambda_{t-1} = 0, \quad \forall t = 2, \dots, T, \tag{7} \]
\[ u_c(k_t, c_t, t) + \lambda_t f_c(k_t, c_t, t) = 0, \quad \forall t = 1, \dots, T, \tag{8} \]
\[ u_k(k_1, c_1, 1) + \lambda_1 f_k(k_1, c_1, 1) - \mu_1 = 0, \tag{9} \]
\[ -\lambda_T + \mu_{T+1} = 0, \tag{10} \]
and (2)-(4).
This works, at least in principle, if $T$ is finite. An alternative way is to recognize that in a problem like this, each sub-section of the path must be optimal in itself. This means that the problem has a recursive formulation, i.e., it can be set up sequentially. We can thus solve the problem backwards, starting from the last period. In any period, the remaining problem only depends on earlier actions through the inherited value of $k$.

For example, if the problem is over three periods ($T = 3$) we can rewrite (1):
\[ \max_{c_1, k_2} \left\{ u(k_1, c_1, 1) + \max_{c_2, k_3} \left\{ u(k_2, c_2, 2) + \max_{c_3, k_4} u(k_3, c_3, 3) \right\} \right\} \tag{11} \]
\[ \text{s.t. } k_{t+1} = f(k_t, c_t, t), \quad t = 1, \dots, 3, \tag{12} \]
\[ k_4 = \underline{k}. \tag{13} \]
In the final period ($t = 3$), the problem is trivial; simply set $c_3$ so that $k_4 = \underline{k}$. The value of $c_3$ that solves $\underline{k} = f(k_3, c_3, 3)$ is a function of $k_3$.⁵ Denote that function $c_3(k_3)$. We can then define
\[ u(k_3, c_3(k_3), 3) = V(k_3, 3). \tag{14} \]
The interpretation of $V(k_3, 3)$ is the maximum remaining pay-off in period 3, being a function of the state variable $k_3$.

In period 2, we then want to solve
\[ \max_{c_2, k_3} \left( u(k_2, c_2, 2) + u(k_3, c_3(k_3), 3) \right) \tag{15} \]
\[ \text{s.t. } k_3 = f(k_2, c_2, 2). \tag{16} \]
Using $V(k_3, 3)$, we can write this
\[ \max_{c_2} \left( u(k_2, c_2, 2) + V(f(k_2, c_2, 2), 3) \right). \tag{17} \]
The solution and the maximized value depend on $k_2$ only; we call the latter the value function and $k_2$ the state variable. We can then define
\[ V(k_2, 2) = \max_{c_2} \left( u(k_2, c_2, 2) + V(f(k_2, c_2, 2), 3) \right). \tag{18} \]
The interpretation of this Bellman equation is straightforward. It says that the maximum remaining pay-off in period 2, being a function of $k_2$, is identically (i.e., for all $k_2$) equal to the maximum, over the period 2 control $c_2$, of the period 2 pay-off plus the maximum remaining pay-off in period 3, with the period 3 state variable given by $f(k_2, c_2, 2)$.

⁵ For now, we just assume it is unique.
The trade-off between the generation of current and future pay-off is optimized by one simple first order condition:
\[ u_c(k_2, c_2, 2) + V_k(f(k_2, c_2, 2), 3)\, f_c(k_2, c_2, 2) = 0. \tag{19} \]
Finally, in the first period,
\[ V(k_1, 1) = \max_{c_1} \left( u(k_1, c_1, 1) + V(f(k_1, c_1, 1), 2) \right). \tag{20} \]
If we know the value functions, the multidimensional problem has become much simpler. The Bellman equation provides a way of verifying that the value function we use is correct. It is of course straightforward to extend the analysis to any finite horizon problem, yielding
\[ V(k_t, t) = \max_{c_t} \left( u(k_t, c_t, t) + V(k_{t+1}, t+1) \right) \tag{21} \]
\[ \text{s.t. } k_{t+1} = f(k_t, c_t, t). \tag{22} \]
5.1.1 Discounting and the Current Value Bellman equation

Very often in macroeconomics, the objective function is a discounted sum of pay-offs, i.e., (1) can be written
\[ \max_{\{c_t\}_{t=1}^{T}} \sum_{t=1}^{T} \beta^{t-1} u(k_t, c_t, t). \tag{23} \]
In this case, it is convenient to work with current value functions $\bar V(k, t)$, which should be interpreted as the maximum remaining value that can be achieved from time $t$ and onward, given $k_t$ and seen from period $t$. In other words, given a problem
\[ \max_{\{c_t\}_{t=1}^{T}} \sum_{t=1}^{T} \beta^{t-1} u(k_t, c_t, t) \tag{24} \]
\[ \text{s.t. } k_{t+1} = f(k_t, c_t, t), \quad t = 1, \dots, T, \tag{25} \]
\[ k_1 = \bar k, \tag{26} \]
\[ k_{T+1} = \underline{k}, \]
we define for any $t \in \{1, \dots, T\}$
\[ \bar V(k_t, t) = \max_{\{c_s, k_s\}_{s=t}^{T}} \sum_{s=t}^{T} \beta^{s-t} u(k_s, c_s, s) \tag{27} \]
\[ \text{s.t. } k_{s+1} = f(k_s, c_s, s), \quad s = t, \dots, T, \tag{28} \]
\[ k_{T+1} = \underline{k}. \]
From this follows that
\[ V(k, t) = \beta^{t-1} \bar V(k, t). \]
Using this in the Bellman equation (where I substitute for $k_{t+1}$ from the law-of-motion), we get the current value Bellman equation:
\[ V(k_t, t) = \max_{c_t} \left( \beta^{t-1} u(k_t, c_t, t) + V(f(k_t, c_t, t), t+1) \right) \tag{29} \]
\[ \beta^{t-1} \bar V(k_t, t) = \max_{c_t} \left( \beta^{t-1} u(k_t, c_t, t) + \beta^{t} \bar V(f(k_t, c_t, t), t+1) \right) \tag{30} \]
\[ \bar V(k_t, t) = \max_{c_t} \left( u(k_t, c_t, t) + \beta \bar V(f(k_t, c_t, t), t+1) \right). \tag{31} \]
In practice, the current value Bellman equation is the most used variant in macroeconomics and, therefore, you will often see the word "current" dropped: (31) is simply referred to as the Bellman equation, and $\bar V(k, t)$ is referred to as the value function.
5.1.2 Infinite Horizon and Autonomous Problems

In an infinite horizon problem we cannot use the method of starting from the last period. Still, if the problem has a well-defined value function, it satisfies the Bellman equation. Furthermore, under conditions we will talk about later, there is only one function that solves the Bellman equation, so if we find one function that solves the Bellman equation, we have a solution to the dynamic optimization problem. Since geometric discounting will prove to be important for showing uniqueness, we will use that from now on.

To find a solution, we will use two different approaches:

1. Guess a value function and verify that it satisfies the Bellman equation.

2. Iterate on the Bellman equation until it converges.

Guessing is often feasible when the problem is autonomous (stationary). Then the problem is independent of time in the sense that, given an initial condition on the state variable(s), the solution and the maximized objective are independent of the starting date. A problem is autonomous if

1. time is infinite,

2. the law of motion for the state (including constraints on the control) is independent of time, and

3. the per-period return function is the same over time, except possibly for a geometric discount factor, i.e., $u(k, c, t) = \beta^t u(k, c)$.

(Think about what would happen if any of these conditions were not satisfied.) In this case, the current value function turns out to be independent of time.⁶ We can then write the current Bellman equation in terms of the current value function:
\[ V(k_t) = \max_{c_t} \left( u(k_t, c_t) + \beta V(k_{t+1}) \right) \tag{32} \]
\[ \text{s.t. } k_{t+1} = f(k_t, c_t). \tag{33} \]
Suppose we find a solution to the maximization problem on the RHS of (32). This will be a time invariant function $c(k)$, since $u$, $V$ and $f$ are time-invariant. Plugging $c(k)$ into (32), we get rid of the max-operator:
\[ V(k) = u(k, c(k)) + \beta V(f(k, c(k))). \tag{34} \]
If (34) is satisfied for all admissible values of $k$, we have a solution for the value function; otherwise our guess was incorrect.

Note that (34) is a functional equation, i.e., the LHS and RHS have to be the same functions. It is convenient to define the RHS as
\[ T(V(k)) = \max_{c} \; u(k, c) + \beta V(f(k, c)), \tag{35} \]
where $T$ operates on functions rather than on values.⁷ In the autonomous case, when the value function is unchanged over time, the Bellman equation defines a fixed point for $T$ in the space of functions $V$:
\[ V(k) = T(V(k)). \tag{36} \]
(36) means that if we plug some function of $k$ into the RHS of (36), we must get out the same function on the LHS. The Bellman equation is a necessary condition for $V(k)$ being a correctly specified value function; we will later discuss conditions under which it is also sufficient.

Typically, the value function and the objective function are of similar form. This is intuitive in the light of (34). For example, if the $u$ function is logarithmic, we guess that the value function is of the logarithmic form too. For HARA utility functions (e.g., CRRA, CARA and quadratic), the value functions are generally of the same type as the utility function (Merton, 1971).

⁶ The present value function $V(k, t)$ is not independent of time, but separable, so that we can write $V(k, t) = \bar V(k)\, \beta^{t-1}$.
⁷ See section 9 for more on the T-mapping.
5.1.3 An example of guessing

In the problem (1), let time be infinite and
\[ u(k, c, t) = \beta^t \ln(c), \tag{37} \]
\[ f(k, c, t) = k^\alpha - c, \quad 0 < \alpha < 1, \tag{38} \]
and let the end-condition $k_{T+1} = \underline{k}$ be replaced by $k_t \ge 0 \;\forall t$. This is an autonomous problem, so we have
\[ V(k_t) = \max_{c_t} \left( u(c_t) + \beta V(k_{t+1}) \right) \tag{39} \]
\[ \text{s.t. } k_{t+1} = f(k_t, c_t) \tag{40} \]
\[ \Rightarrow V(k_t) = \max_{c_t} \left( \ln c_t + \beta V(k_t^\alpha - c_t) \right). \tag{41} \]
Now, guess that $V$ is of the same form as $u$, here $A \ln k_t + B$ for some unknown constants $A$ and $B$, giving the first order condition
\[ u'(c_t) = \beta V'(k_t^\alpha - c_t) \tag{42} \]
\[ \frac{1}{c_t} = \beta \frac{A}{k_t^\alpha - c_t} \tag{43} \]
\[ \Rightarrow c_t = \frac{k_t^\alpha}{1 + \beta A} = c(k_t), \tag{44} \]
\[ k_{t+1} = k_t^\alpha - c_t = k_t^\alpha - \frac{k_t^\alpha}{1 + \beta A} = \frac{\beta A}{1 + \beta A} k_t^\alpha. \tag{45} \]
Plugging $c(k_t)$ into the Bellman equation yields
\[ A \ln k_t + B = \ln \frac{k_t^\alpha}{1 + \beta A} + \beta \left( A \ln \frac{\beta A}{1 + \beta A} k_t^\alpha + B \right) \tag{46} \]
\[ = \ln \frac{k_t^\alpha}{1 + \beta A} + \beta \left( A \ln \frac{k_t^\alpha}{1 + \beta A} + A \ln \beta A + B \right) \tag{47} \]
\[ = (1 + \beta A) \ln \frac{k_t^\alpha}{1 + \beta A} + \beta \left( A \ln \beta A + B \right) \tag{48} \]
\[ = \alpha (1 + \beta A) \ln k_t - (1 + \beta A) \ln (1 + \beta A) + \beta A \ln \beta A + \beta B. \tag{49} \]
This is true for all values of $k$ if and only if
\[ A = \alpha (1 + \beta A), \tag{50} \]
\[ B = -(1 + \beta A) \ln (1 + \beta A) + \beta A \ln \beta A + \beta B, \tag{51} \]
giving
\[ A = \frac{\alpha}{1 - \alpha\beta}, \tag{52} \]
\[ B = \frac{\beta\alpha \ln(\beta\alpha) + (1 - \beta\alpha) \ln(1 - \alpha\beta)}{(1 - \beta)(1 - \alpha\beta)} \tag{53} \]
\[ \Rightarrow V(k) = \frac{\alpha}{1 - \alpha\beta} \ln k + \frac{\beta\alpha \ln(\beta\alpha) + (1 - \beta\alpha) \ln(1 - \alpha\beta)}{(1 - \beta)(1 - \alpha\beta)}. \tag{54} \]
Having $V(k)$, it is easy to find the optimal control, or the policy rule,
\[ c(k_t) = \frac{k_t^\alpha}{1 + \beta \frac{\alpha}{1 - \alpha\beta}} = (1 - \alpha\beta)\, k_t^\alpha, \tag{55} \]
\[ \ln k_{t+1} = \ln \alpha\beta + \alpha \ln k_t. \tag{56} \]
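The guess can also be checked numerically. A minimal sketch (illustrative parameter values; a finite grid, so only approximate agreement should be expected) compares value-function iteration with the closed form (54):

```python
import numpy as np

# Compare grid-based value-function iteration with the closed form
# V(k) = A ln k + B and the policy c(k) = (1 - a*b) k**a.
a, b = 0.3, 0.95                                 # alpha, beta (illustrative)
A = a / (1 - a * b)
B = (b * a * np.log(b * a)
     + (1 - b * a) * np.log(1 - a * b)) / ((1 - b) * (1 - a * b))

grid = np.linspace(0.05, 2.0, 400)               # grid for k
V = np.zeros_like(grid)                          # arbitrary initial guess
for _ in range(1000):
    # consumption implied by each (k, k') pair; -inf where infeasible
    c = grid[:, None] ** a - grid[None, :]
    val = np.where(c > 0,
                   np.log(np.maximum(c, 1e-12)) + b * V[None, :], -np.inf)
    V = val.max(axis=1)

print(np.max(np.abs(V - (A * np.log(grid) + B))))  # small grid-induced error
```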
5.1.4 Iteration

As mentioned above, an alternative way to find the infinite horizon value function is to find the limit of the finite horizon Bellman equation as the horizon goes to infinity. Under conditions that are quite general for our purposes, this limit exists and is equal to the value function of the infinite horizon problem. Let us change notation slightly, measuring time as the number of remaining periods $s$ until the final period. We then denote the (current) value function with $s$ periods left by
\[ V(k, s). \tag{57} \]
We also assume geometric discounting and that both pay-offs and the law-of-motion for the state variable are time-independent ($u(k, c, s) = u(k, c)$ and $f(k, c, s) = f(k, c)$), so that the infinite horizon problem is autonomous. If the following limit is well-defined, we denote
\[ \lim_{s\to\infty} V(k, s) = V(k). \tag{58} \]
The iteration method is usually carried out numerically, but it can (at some cost of messiness) also be done analytically. Using the $T$ operator and the Bellman equation, we find the limit in the following way:
\[ V(k, s) = \max_{c_s} \left( u(k_s, c_s) + \beta V\left(f(k_s, c_s), s-1\right) \right) \tag{59} \]
\[ = T\left(V(k, s-1)\right) \tag{60} \]
\[ V(k, s) = T^s V(k, 0) \tag{61} \]
\[ V(k) = \lim_{s\to\infty} V(k, s) = \lim_{s\to\infty} T^s V(k, 0). \tag{62} \]
If the limit exists, it clearly satisfies the Bellman equation
\[ V(k) = T(V(k)) \tag{63} \]
since
\[ \lim_{s\to\infty} T^s V(k, 0) = T\left( \lim_{s\to\infty} T^s V(k, 0) \right) = \lim_{s\to\infty} T^{s+1} V(k, 0). \tag{64} \]
The remaining issue is what function $V(k, 0)$ to start the iteration with. Suppose that we can show that the limit $V(k)$ satisfies
\[ \lim_{s\to\infty} \beta^s V(k_{t+s}) = 0 \tag{65} \]
for ALL admissible values of $k_{t+s}$ that can be reached, given relevant initial conditions and other constraints. Then it can easily be shown that the Bellman equation is sufficient, and $\lim_{s\to\infty} T^s V(k, 0)$ provides the unique solution to the Bellman equation. This means that the limit is independent of the choice of $V(k, 0)$. As we see from (65), $\beta < 1$ and $V(k)$ bounded are sufficient for uniqueness. Intuitively, if $\beta < 1$ and pay-offs are bounded, pay-offs infinitely far in the future have no impact on the value function. Let us revert to measuring time in the usual way. Then, if the Bellman equation is satisfied, we have
\[ V(k_t) = \max_{c_t} \left( u(k_t, c_t) + \beta V(k_{t+1}) \right) \tag{66} \]
\[ \text{s.t. } k_{t+1} = f(k_t, c_t) \]
\[ = \max_{c_t} \left( u(k_t, c_t) + \beta \left( \max_{c_{t+1}} \left( u(k_{t+1}, c_{t+1}) + \beta V(k_{t+2}) \right) \right) \right) \tag{67} \]
\[ \text{s.t. } k_{t+1} = f(k_t, c_t), \quad k_{t+2} = f(k_{t+1}, c_{t+1}) \]
\[ = \max_{\{c_{t+n}\}_{0}^{1}} \sum_{n=0}^{1} \beta^n u(k_{t+n}, c_{t+n}) + \beta^2 V\left(f(k_{t+1}, c_{t+1})\right) \tag{68} \]
\[ \text{s.t. } k_{t+1} = f(k_t, c_t), \quad k_{t+2} = f(k_{t+1}, c_{t+1}). \tag{69} \]
Repeating this and taking the limit yields
\[ V(k_t) = \max_{\{c_{t+n}\}_{0}^{s}} \sum_{n=0}^{s} \beta^n u(k_{t+n}, c_{t+n}) + \beta^{s+1} V(k_{t+s+1}) \tag{70} \]
\[ \text{s.t. } k_{t+n+1} = f(k_{t+n}, c_{t+n}) \quad \forall n \in \{0, \dots, s\}, \tag{71} \]
\[ V(k_t) = \max_{\{c_{t+n}\}_{0}^{\infty}} \sum_{n=0}^{\infty} \beta^n u(k_{t+n}, c_{t+n}) + \lim_{s\to\infty} \beta^{s+1} V(k_{t+s+1}) \tag{72} \]
\[ \text{s.t. } k_{t+n+1} = f(k_{t+n}, c_{t+n}) \quad \forall n \ge 0. \tag{73} \]
Now, if $\lim_{s\to\infty} \beta^{s+1} V(k_{t+s+1}) = 0$ for all permissible paths of $k$, we have shown that
\[ V(k_t) = \max_{\{c_{t+s}\}_{0}^{\infty}} \sum_{s=0}^{\infty} \beta^s u(k_{t+s}, c_{t+s}) \tag{74} \]
\[ \text{s.t. } k_{t+s+1} = f(k_{t+s}, c_{t+s}) \quad \forall s \ge 0, \text{ given } k_t, \tag{75} \]
i.e., that the Bellman equation implies optimality.

The iteration can easily be done numerically, either by specifying a functional form if we know one, or by just choosing a grid. In the latter case we assume that the state variable must belong to a finite set of values, say that in every period $k$ must be chosen from the set $\mathbf{K} = \{k_1, k_2, \dots, k_n\}$. Then we can compute the corresponding set of possible controls, $c_{m,n} \in C$, from the equation
\[ k_m = f(k_n, c_{m,n}). \tag{76} \]
Then, in each iteration, we solve the Bellman equation for each $k \in \mathbf{K}$, giving for iteration $s$
\[ V(k_n, s) = \max_{c_{m,n} \in C} \left( u(k_n, c_{m,n}) + \beta V(k_m, s-1) \right). \tag{77} \]
This goes quickly on a computer, and the iteration is repeated until $V(k, s)$ is sufficiently close to $V(k, s-1)$ over the set $k \in \mathbf{K}$.
5.1.5 An envelope result

We will later have use for the following envelope result, which implies that we can evaluate $dV(k)/dk$ as the partial derivative holding $c$ constant. To see this, note that
\[ \frac{dV(k_t)}{dk_t} = V'(k_t) = \frac{\partial u(k_t, c_t)}{\partial k_t} + \beta V'(k_{t+1}) \frac{\partial f(k_t, c_t)}{\partial k_t} \tag{78} \]
\[ + \frac{dc_t}{dk_t} \left( \frac{\partial u(k_t, c_t)}{\partial c_t} + \beta V'(k_{t+1}) \frac{\partial f(k_t, c_t)}{\partial c_t} \right). \tag{79} \]
However, the Bellman equation implies that the last term above is zero if $c$ is chosen optimally: either because the first-order condition for an interior maximum, $\frac{\partial u(k_t, c_t)}{\partial c_t} + \beta V'(k_{t+1}) \frac{\partial f(k_t, c_t)}{\partial c_t} = 0$, is satisfied, or, in the case of a corner, because $\frac{dc_t}{dk_t} = 0$.

This envelope result is often very useful. Consider the example
\[ u(k_t, c_t) = \upsilon(c_t), \tag{80} \]
\[ f(k_t, c_t) = g(k_t) - c_t. \tag{81} \]
The interior solution to the Bellman equation satisfies
\[ 0 = \upsilon'(c_t) + \beta V'(k_{t+1}) \frac{\partial f(k_t, c_t)}{\partial c_t}, \tag{82} \]
\[ \upsilon'(c_t) = \beta V'(k_{t+1}). \tag{83} \]
The envelope condition yields
\[ V'(k_t) = \frac{\partial u(k_t, c_t)}{\partial k_t} + \beta V'(k_{t+1}) \frac{\partial f(k_t, c_t)}{\partial k_t} \tag{84} \]
\[ = \upsilon'(c_t)\, g'(k_t) \tag{85} \]
\[ \Rightarrow V'(k_{t+1}) = \upsilon'(c_{t+1})\, g'(k_{t+1}). \tag{86} \]
Using this in (82) yields the Euler equation
\[ \upsilon'(c_t) = \beta\, \upsilon'(c_{t+1})\, g'(k_{t+1}). \tag{87} \]
5.1.6 State Variables

We often solve the dynamic programming problem by guessing a form of the value function. The first thing to determine is then which variables should enter, i.e., which variables are the state variables. The state variables must satisfy both of the following conditions:

1. To enter the value function at time $t$, they must be realized at $t$. Note, however, that it may sometimes be convenient to use a conditional expectation $E_t(z_{t+s})$ as a state variable. The expectation as of $t$ is certainly realized at $t$ even if the stochastic variable $z_{t+s}$ is not.

2. The set of variables chosen as state variables must together give sufficient information so that the value of the program from $t$ and onwards, when the optimal control is chosen, can be calculated.⁸

Note that we should try to find the smallest such set. If, for example, we have an investment problem with several assets to invest in and no costs of adjusting the portfolio, total wealth may be sufficient as a state variable.

⁸ Can you figure out what we would need if the per period utility function in (1) were $u(c_t, c_{t-1})$?

5.2 Stochastic Dynamic Programming

As long as the recursive structure of the problem is intact, adding a stochastic element to the transition equation does not change the Bellman equation.
Consider the problem
\[ \max_{\{c_t\}_{0}^{\infty}} E_0 \sum_{t=0}^{\infty} \beta^t u(k_t, c_t) \tag{88} \]
\[ \text{s.t. } k_{t+1} = f(k_t, c_t, \varepsilon_{t+1}) \;\forall t \ge 0, \quad k_0 \text{ given, and} \tag{89} \]
\[ k_t \ge 0 \;\forall t, \tag{90} \]
where $E_0$ is the expectations operator conditional on time 0 information, and we assume that $c_t$ can be chosen conditional on information about $\varepsilon_s$ for all $s \le t$. Furthermore, let us assume that the distribution of $\varepsilon_t$ is i.i.d. over time. Then the Bellman equation becomes
\[ V(k_t) = \max_{c_t} \left( u(k_t, c_t) + \beta E_t V\left(f(k_t, c_t, \varepsilon_{t+1})\right) \right), \tag{91} \]
with first-order condition
\[ 0 = u_c(k_t, c_t) + \beta E_t \left( V'(k_{t+1})\, f_c(k_t, c_t, \varepsilon_{t+1}) \right). \tag{92} \]
Note that, in general, $E_t(V'(k_{t+1}) f_c(k_t, c_t, \varepsilon_{t+1})) \neq E_t V'(k_{t+1})\, E_t f_c(k_t, c_t, \varepsilon_{t+1})$.
5.2.1 A Stochastic Consumption Example

Consider the following problem:
\[ \max_{\{c_t\}_{0}^{\infty}} E_0 \sum_{t=0}^{\infty} \beta^t \ln c_t \tag{93} \]
\[ \text{s.t. } w_{t+1} = (w_t - c_t)(1 + \tilde r_{t+1}) \;\forall t \ge 0, \tag{94} \]
\[ w_t \ge 0 \;\forall t \ge 0, \quad w_0 \text{ given.} \tag{95} \]
The consumer decides how much to consume each period. The savings are placed in a risky asset with gross return $(1 + \tilde r_{t+1})$, drawn from an i.i.d. distribution with $E(\ln(1 + \tilde r_{t+1})) = \bar r$. If, for example, the gross return is log-normal, with $\ln(1 + \tilde r_{t+1})$ having mean $\bar r$ and variance $\sigma^2$, then $E(1 + \tilde r_{t+1}) = e^{\bar r + \frac{\sigma^2}{2}}$.

The problem is autonomous, so we write the current value Bellman equation with a time independent value function $V$:
\[ V(w_t) = \max_{c_t} \ln c_t + \beta E_t V\left( (w_t - c_t)(1 + \tilde r_{t+1}) \right). \tag{96} \]
The necessary first order condition for $c_t$ yields
\[ \frac{1}{c_t} = \beta E_t V'(w_{t+1}) (1 + \tilde r_{t+1}). \tag{97} \]
Now we use Merton's result and guess that the value function is
\[ V(w_t) = B + A \ln w_t \tag{98} \]
for some constants $B$ and $A$. Substituting into (97), we get
\[ \frac{1}{c_t} = \beta E_t \frac{A}{w_{t+1}} (1 + \tilde r_{t+1}) \tag{99} \]
\[ = \beta E_t \frac{A (1 + \tilde r_{t+1})}{(w_t - c_t)(1 + \tilde r_{t+1})} \tag{100} \]
\[ = \beta \frac{A}{w_t - c_t}, \tag{101} \]
\[ c_t = \frac{w_t}{1 + \beta A}, \tag{102} \]
\[ w_t - c_t = \frac{\beta A\, w_t}{1 + \beta A}. \tag{103} \]
Now we have to solve for the constant $A$. This is done by substituting the solutions to the first order conditions and the guess into the Bellman equation:
\[ B + A \ln w_t = \max_{c_t} \ln c_t + \beta E_t V\left( (w_t - c_t)(1 + \tilde r_{t+1}) \right) \tag{104} \]
\[ = \ln \frac{w_t}{1 + \beta A} + \beta E_t V\left( \frac{\beta A\, w_t}{1 + \beta A}(1 + \tilde r_{t+1}) \right) \tag{105} \]
\[ = \ln \frac{w_t}{1 + \beta A} + \beta E_t \left( B + A \ln\left( \frac{\beta A\, w_t}{1 + \beta A}(1 + \tilde r_{t+1}) \right) \right) \tag{106} \]
\[ = (1 + \beta A) \ln w_t - (1 + \beta A)\ln(1 + \beta A) + \beta B \tag{107} \]
\[ + \beta A \ln \beta A + \beta A \bar r. \tag{108} \]
This is satisfied for all $w_t$ iff
\[ A = 1 + \beta A \tag{109} \]
\[ A = \frac{1}{1 - \beta}, \tag{110} \]
\[ B = -\frac{1}{1-\beta}\ln\frac{1}{1-\beta} + \beta B + \beta\frac{1}{1-\beta}\ln\left(\beta\frac{1}{1-\beta}\right) + \beta\frac{\bar r}{1-\beta} \tag{111} \]
\[ B = \frac{\ln(1-\beta)}{1-\beta} + \frac{\beta}{(1-\beta)^2}\left( \bar r + \ln\beta \right). \tag{112} \]
Thus,
\[ c_t = \frac{w_t}{1 + \beta\frac{1}{1-\beta}} = (1-\beta)\, w_t, \tag{113} \]
\[ w_t - c_t = \beta w_t, \tag{114} \]
\[ w_{t+1} = w_t\, \beta (1 + \tilde r_{t+1}), \tag{115} \]
\[ \ln w_{t+1} = \ln w_t + \ln\beta + \ln(1 + \tilde r_{t+1}), \tag{116} \]
\[ E_t \ln w_{t+1} = \ln w_t + \ln\beta + \bar r. \tag{117} \]
Note that since $\ln(1 + \tilde r_{t+1})$ is normally distributed, $(1 + \tilde r_{t+1}) > 0$ for all $t$, implying $w_t > 0$ for all $t$. If, on the other hand, $(1 + \tilde r_{t+1})$ could be negative with positive probability, $E_t \ln(1 + \tilde r_{t+1})$ would be minus infinity, implying that the value function is ill-defined.
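A small simulation sketch (illustrative parameters) of the first order condition (97) under the policy (113). Because the gross return cancels inside the expectation, the two sides agree:

```python
import numpy as np

# With log utility and i.i.d. log-normal gross returns, c_t = (1-beta)*w_t
# should satisfy 1/c_t = beta * E_t[ V'(w_{t+1}) (1 + r~) ] where
# V'(w) = 1 / ((1 - beta) * w), from V = B + A ln w with A = 1/(1-beta).
rng = np.random.default_rng(0)
beta, rbar, sigma = 0.95, 0.02, 0.1
w = 3.0                                        # current wealth (illustrative)
c = (1 - beta) * w                             # candidate optimal consumption
gross = np.exp(rng.normal(rbar, sigma, 1_000_000))   # draws of 1 + r~
w_next = (w - c) * gross
rhs = beta * np.mean((1 / ((1 - beta) * w_next)) * gross)
print(1 / c, rhs)                              # the two sides agree
```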
5.3 Contraction mappings

In the previous section we discussed guessing solutions to the Bellman equation. However, we would like to know whether a solution exists and whether it is unique. If the latter is not the case, it is in principle not sufficient to guess and verify, since there might be other value functions that also satisfy the Bellman equation. To prove existence and uniqueness we will apply a contraction mapping argument.⁹ For this purpose, we first have to define some concepts.

⁹ An alternative is sometimes to look for the limit $\lim_{s\to\infty} T^s(V(k, 0))$, which typically is the solution we are interested in (at least in macroeconomics).

5.3.1 Complete Metric Spaces and Cauchy Sequences

Let $X$ be a vector space, i.e., a set on which addition and scalar multiplication are defined. Also define an operator $d$, which we can think of as measuring the (generalized) distance between any two elements of $X$. We call $d$ a norm, assumed to satisfy:

1. Positivity: $\forall x, y \in X$, $d(x, y) \ge 0$ and $d(x, y) = 0 \Leftrightarrow x = y$.

2. Symmetry: $\forall x, y \in X$, $d(x, y) = d(y, x)$.

3. Triangle inequality: $\forall x, y, z \in X$, $d(x, z) \le d(x, y) + d(y, z)$.

Now we call $(X, d)$ a normed vector space or a metric space. An example of such a space would be $\mathbb{R}^n$ together with the Euclidean norm $d(x, y) = \|x - y\|$. Another example is the space $C(S)$ of continuous, bounded functions, where each element is a function from $S \subseteq \mathbb{R}^n$ to $\mathbb{R}$, together with the sup-norm, defined as follows: for any two elements of $C(S)$, i.e., any two functions $u(s)$ and $v(s)$, the distance $d$ between them is the maximal Euclidean distance, i.e.,
\[ d(u, v) = \sup_{s \in S} \| u(s) - v(s) \|. \tag{118} \]
Now let us define a Cauchy sequence. Intuitively, this is a sequence of elements $x_n$ in a space $X$ that come closer and closer to each other, using some particular norm. More precisely, $\{x_n\}$ is defined as a Cauchy sequence of elements in $X$ if for all $\varepsilon > 0$ there exists a number $n$ such that for all $m, p \ge n$, $d(x_m, x_p) < \varepsilon$. An example of such a sequence would be $1, 1/2, 1/3, \dots$, which is a Cauchy sequence using the Euclidean norm. A Cauchy sequence converges if there is an element $y \in X$ such that $\lim_{n\to\infty} d(x_n, y) = 0$. It may, of course, be the case that a Cauchy sequence does not converge to a point in $X$. An example would be if we let $X$ be the half-open interval $(0, 1]$ and look at the Cauchy sequence $1, 1/2, 1/3, \dots$, which lies in $X$ but converges to zero, which is not in $X$.

5.3.2 Complete metric spaces

Now we are ready to define the complete metric space. This is a metric space in which all Cauchy sequences converge to a point in the space.

5.3.3 Contraction Mapping

Consider a metric space $(X, d)$ and look at an operator $T$ that maps $X$ into $X$. $T$ is by definition a contraction mapping if there exists a non-negative number $\rho \in [0, 1)$ such that for all elements $x, y \in X$,
\[ d(T(x), T(y)) \le \rho\, d(x, y), \tag{119} \]
where we note that $\rho$ must be strictly smaller than one.

An example of a contraction mapping would be a map in, say, scale 1:10000 put on top of a map in scale 1:1000 covering the same geographical area. The norm can be the distance between the points on the maps. Clearly, (119) is satisfied for $\rho = 0.1$.
5.3.4 The Contraction Mapping Theorem

Now we can state the very important contraction mapping theorem.

Result 14 Consider a complete metric space $(X, d)$ and let $T : X \to X$ be a contraction mapping. Then $T$ has one unique fixed point $x^* \in X$, i.e., the solution to $x = T(x)$ always exists and is unique. Furthermore, the sequence $x_0, T(x_0), T^2(x_0), \dots, T^n(x_0)$ converges to $x^*$ for all $x_0 \in X$.

There are theorems that can be used to show that $T$ is a contraction mapping.

Result 15 Let the state space $S$ be a subset of $\mathbb{R}^n$ and $B(S)$ the set of all continuous, bounded functions from $S$ to $\mathbb{R}$. Endowed with the sup-norm, $B(S)$ is a complete metric space. Let $T$ be a map from $B(S)$ to $B(S)$. Then $T$ is a contraction mapping if

1. for any functions $u(s), v(s) \in B(S)$, the following holds: if $u(s) \le v(s) \;\forall s \in S$ then $T(u(s)) \le T(v(s)) \;\forall s \in S$ (monotonicity), and

2. there is a $\beta \in [0, 1)$ such that for any constant $\kappa \ge 0$ and any function $u(s) \in B(S)$, $T(u(s) + \kappa) = T(u(s)) + \beta\kappa$ (discounting).

Usually it is straightforward to apply this result to show that if we have strict discounting, the Bellman equation is a contraction mapping. There is one major limitation we have to live with, however: Result 15 and variants of it require bounded value functions.

Let us look at an example where we apply Result 15. Consider a simple growth model,
\[ \max_{\{c_t\}_{0}^{\infty}} \sum_{t=0}^{\infty} \beta^t u(c_t) \tag{120} \]
\[ \text{s.t. } k_{t+1} = f(k_t) - c_t \;\forall t \ge 0, \quad k_0 \text{ given, and} \tag{121} \]
\[ c_t, k_t \ge 0 \;\forall t, \tag{122} \]
where $u$ is a continuous and increasing (utility) function with $u(0) \ge u_{\min}$ and $0 \le \beta < 1$. To use the theorems we need to make some assumptions. First, we need boundedness. For this purpose, we assume
\[ f(0) = 0, \tag{123} \]
\[ f'(k) \ge 0 \;\forall k \ge 0, \tag{124} \]
and the existence of a maximum sustainable stock $\bar k$ with $f(k) \le \bar k$ for all $k \le \bar k$. \tag{125}

Now define $S = [0, \bar k]$, implying that any value function must satisfy $\frac{u_{\min}}{1-\beta} \le V(k) \le \frac{u(f(\bar k))}{1-\beta}$. By restricting the state space to $S = [0, \bar k]$, we can therefore restrict our search to value functions that are bounded on the state space.

Now let us establish that the following mapping is a contraction:
\[ V(k) = \max_{c \ge 0}\; u(c) + \beta V(f(k) - c) \equiv T(V(k)). \tag{126} \]
Regarding condition 1, we need, for any two bounded functions $v(k), w(k)$, $k \in S$,
\[ v(k) \le w(k) \;\forall k \;\Rightarrow\; T(v(k)) \le T(w(k)) \;\forall k, \tag{127} \]
which is satisfied. To see this, define
\[ c^* = \arg\max_{c}\; u(c) + \beta v(f(k) - c); \tag{128} \]
then
\[ T(v(k)) = \max_{c}\; u(c) + \beta v(f(k) - c) = u(c^*) + \beta v(f(k) - c^*) \tag{129} \]
\[ \le u(c^*) + \beta w(f(k) - c^*) \le \max_{c}\; u(c) + \beta w(f(k) - c) = T(w(k)). \tag{130} \]
Regarding condition 2, we note
\[ T(v(k) + \kappa) = \max_{c \ge 0}\; u(c) + \beta\left( v(f(k) - c) + \kappa \right) \tag{131} \]
\[ = \max_{c \ge 0}\; u(c) + \beta v(f(k) - c) + \beta\kappa \tag{132} \]
\[ = T(v(k)) + \beta\kappa, \tag{133} \]
so the second condition is satisfied too. Thus, the Bellman equation is a contraction mapping and always has one and only one solution, $V(k)$.
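The contraction property is easy to see numerically. In the sketch below (with the illustrative choices $u(c) = \sqrt{c}$ and $f(k) = 2\sqrt{k}$, which satisfy the boundedness assumptions on $[0, 4]$), iterating $T$ from two very different initial guesses produces the same fixed point:

```python
import numpy as np

# Grid version of the mapping (126): two initial guesses converge to
# the same fixed point, as the contraction mapping theorem predicts.
beta = 0.9
grid = np.linspace(0.0, 4.0, 200)

def T(V):
    c = 2 * np.sqrt(grid[:, None]) - grid[None, :]       # c for each (k, k')
    val = np.where(c >= 0,
                   np.sqrt(np.maximum(c, 0)) + beta * V[None, :], -np.inf)
    return val.max(axis=1)

V1, V2 = np.zeros_like(grid), 100 * np.ones_like(grid)   # two initial guesses
for _ in range(300):
    V1, V2 = T(V1), T(V2)
print(np.max(np.abs(V1 - V2)))                           # ~0: same fixed point
```

The distance between the two iterates shrinks by at least the factor $\beta$ each round, which is exactly the discounting condition of Result 15 at work.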
6 Dynamic Optimization in Continuous Time

6.1 Dynamic programming in continuous time

Consider the problem
\[ \max_{\{k(t), c(t)\}_{0}^{T}} \int_0^T e^{-rt} u(k, c, t)\, dt \tag{1} \]
\[ \text{s.t. } \dot k = f(k, c, t), \tag{2} \]
\[ k(0) = k_0, \tag{3} \]
with
\[ k(T) = \bar k \text{ (case 1), or} \tag{4} \]
\[ k(T) \text{ free (case 2), or} \tag{5} \]
\[ k(T) \ge \bar k \text{ (case 3).} \]
Proceeding as in the discrete time case, the current value function $V(k, t)$ satisfies a continuous time Bellman equation,
\[ r V(k, t) = \max_{c} \left( u(k, c, t) + V_t(k, t) + V_k(k, t) f(k, c, t) \right). \]
Define the shadow value of the state variable as
\[ \lambda(t) = V_k(k, t). \tag{13} \]
We see that a necessary condition for $c^*$ to solve the maximization on the RHS (at an interior point) is
\[ u_c(k, c^*, t) + V_k(k, t) f_c(k, c^*, t) = 0. \tag{14} \]
Differentiating the maximized Bellman equation with respect to $k$, using the envelope result, gives
\[ r V_k(k, t) = u_k(k, c^*, t) + V_{tk}(k, t) + V_{kk}(k, t) f(k, c^*, t) \tag{15} \]
\[ + V_k(k, t) f_k(k, c^*, t), \tag{16} \]
\[ r\lambda(t) - \left( V_{tk}(k, t) + V_{kk}(k, t)\, \dot k \right) = u_k(k, c^*, t) + \lambda(t) f_k(k, c^*, t), \tag{17} \]
\[ r\lambda(t) - \dot\lambda(t) = u_k(k, c^*, t) + \lambda(t) f_k(k, c^*, t), \tag{18} \]
where in the second equation we changed the order of differentiation, using the fact that $\frac{\partial}{\partial k}\frac{\partial V(k, t)}{\partial t} = \frac{\partial}{\partial t}\frac{\partial V(k, t)}{\partial k}$.

Equations (14) and (18) form the basis of Pontryagin's maximum principle. According to this, we derive necessary (and sometimes sufficient) conditions for an optimal control. We do this without explicitly solving for the value function. We first define the current value Hamiltonian. This is the sum of the instantaneous payoff and the product of the costate(s) and the function determining the law-of-motion of the state variable:
\[ H(k, c, \lambda, t) = u(k, c, t) + \lambda(t) f(k, c, t). \tag{19} \]
Note that the Hamiltonian has the same interpretation as the RHS of the Bellman equation without the max-operator. In words, it is the sum of the flow of current pay-off and the generation of future pay-off. In the Bellman equation, we use the value function to measure future pay-offs, while the Hamiltonian uses the shadow value $\lambda$.
According to Pontryagin's maximum principle, the optimal control $c^*(t)$ maximizes the Hamiltonian at each instant, and the co-state (or shadow value) satisfies the differential equation $r\lambda(t) - \dot\lambda(t) = H_k(k, c, \lambda, t)$. These necessary conditions provide differential equations which we need to solve. Typically we have one initial condition for each state variable, but we need more information to solve the system, since we also have the control variable(s). In case (1) of (4), we have the necessary additional information. In case (2), it must be that the shadow value of the state variable approaches zero as $t \to T$; this is called the transversality condition. In case (3), either the inequality is slack, in which case $\lambda(T) = 0$, or it binds, giving the necessary additional information in both cases. We can then summarize:

Result 16 According to Pontryagin's maximum principle:

1. An optimal control $c^*(t)$ satisfies
\[ c^*(t) = \arg\max_{c} H(k, c, \lambda, t). \tag{20} \]

2. The co-state $\lambda(t)$ satisfies
\[ r\lambda(t) - \dot\lambda(t) = H_k(k, c^*, \lambda, t). \tag{21} \]

3. The end-point condition is
\[ k(T) = \bar k \text{ (case 1), or} \tag{22} \]
\[ \lambda(T) = 0 \text{ (case 2), or} \tag{23} \]
\[ \lambda(T) \ge 0 \text{ and } \lambda(T)\left(k(T) - \bar k\right) = 0 \text{ if } k(T) \ge \bar k \text{ (case 3).} \tag{24} \]

6.2 An example

Consider the problem
\[ \max_{c(t)} \int_0^T e^{-rt} u(c)\, dt \quad \text{s.t. } \dot k = f(k) - c, \quad k(0) = k_0, \quad k(T) \ge 0, \]
with current value Hamiltonian $H(k, c, \lambda, t) = u(c) + \lambda(t)(f(k) - c)$. The first condition then gives
\[ u'(c^*) = \lambda. \tag{31} \]
Furthermore, from the second condition,
\[ H_k(k, c, \lambda, t) = \lambda f'(k) = r\lambda - \dot\lambda. \tag{32} \]
Taking time-derivatives of $u'(c^*) = \lambda$ gives
\[ u''(c^*)\, \dot c^* = \dot\lambda. \tag{33} \]
Combining,
\[ u'(c^*) f'(k) = r\, u'(c^*) - u''(c^*)\, \dot c^*, \tag{34} \]
\[ \dot c^* = -\frac{u'(c^*)}{u''(c^*)} \left( f'(k) - r \right), \tag{35} \]
which is the Euler equation we have seen before. To analyze the behavior of this system, we can use the phase diagram as in section 3.5.
To get a closed form solution, i.e., an expression for the endogenous variables in terms of only the exogenous ones, we must specify the utility and production functions. Considering first the utility function, we have two important special cases. First, CARA utility,
\[ u(c) = -\frac{e^{-\gamma c}}{\gamma}, \tag{36} \]
\[ u'(c) = e^{-\gamma c}, \quad u''(c) = -\gamma e^{-\gamma c}, \tag{37} \]
in which case we get
\[ \dot c^* = \frac{1}{\gamma} \left( f'(k) - r \right), \tag{38} \]
i.e., the consumption increase is a linear function of the difference between the marginal return on savings and the subjective discount rate. The other case is CRRA,
\[ u(c) = \frac{c^{1-\sigma}}{1-\sigma}, \tag{39} \]
\[ u'(c) = c^{-\sigma}, \quad u''(c) = -\sigma c^{-\sigma-1}, \tag{40} \]
yielding
\[ \dot c^* = \frac{c^{-\sigma}}{\sigma c^{-\sigma-1}} \left( f'(k) - r \right) = \frac{c}{\sigma} \left( f'(k) - r \right) \tag{41} \]
\[ \frac{\dot c^*}{c^*} = \frac{1}{\sigma} \left( f'(k) - r \right), \tag{42} \]
i.e., the consumption growth rate is a linear function of $f'(k) - r$. The sensitivity is given by $1/\sigma$, which we call the intertemporal elasticity of substitution. Here, we also have that $\sigma$ is the coefficient of relative risk aversion.

What about the transversality condition? In this case, we know that $u'(c^*(T)) = \lambda(T)$. Since marginal utility in these examples is strictly positive, i.e., $u'(c) > 0$ for all finite $c$, $\lambda(T)$ cannot be 0; instead $k(T) = 0$. In other words, whenever consumption is valuable at $T$, the lower bound on $k$ should bind and nothing should be left.
Let us complete the example by assuming, for simplicity, a linear (Romer type) production function $f(k) = Ak$. In the CRRA case, we get the linear system
\[ \dot c^* = \frac{A - r}{\sigma}\, c^* \tag{43} \]
\[ \dot k = -c^* + A k \tag{44} \]
\[ \begin{pmatrix} \dot c^* \\ \dot k \end{pmatrix} = \begin{pmatrix} \frac{A-r}{\sigma} & 0 \\ -1 & A \end{pmatrix} \begin{pmatrix} c^* \\ k \end{pmatrix}. \tag{45} \]
This system has roots $A$ and $\frac{A-r}{\sigma}$, and the matrix of eigenvectors is
\[ B^{-1} = \begin{pmatrix} 0 & \frac{r + A(\sigma-1)}{\sigma} \\ 1 & 1 \end{pmatrix}. \tag{46} \]
Consequently, the solution is of the form
\[ \begin{pmatrix} c^*(t) \\ k(t) \end{pmatrix} = B^{-1} \begin{pmatrix} e^{At} & 0 \\ 0 & e^{\frac{A-r}{\sigma} t} \end{pmatrix} \begin{pmatrix} \kappa_1 \\ \kappa_2 \end{pmatrix} \tag{47} \]
\[ = \begin{pmatrix} 0 & \frac{r + A(\sigma-1)}{\sigma} e^{\frac{A-r}{\sigma} t} \\ e^{At} & e^{\frac{A-r}{\sigma} t} \end{pmatrix} \begin{pmatrix} \kappa_1 \\ \kappa_2 \end{pmatrix}, \tag{48} \]
where $\kappa_1$ and $\kappa_2$ are two integration constants. We solve for the latter by using $k(0) = k_0$ and $k(T) = 0$:
\[ k(0) = k_0 = \begin{pmatrix} 1 & 1 \end{pmatrix} \begin{pmatrix} \kappa_1 \\ \kappa_2 \end{pmatrix} \tag{49} \]
\[ = \kappa_1 + \kappa_2, \tag{50} \]
\[ k(T) = 0 = \begin{pmatrix} e^{AT} & e^{\frac{A-r}{\sigma} T} \end{pmatrix} \begin{pmatrix} \kappa_1 \\ \kappa_2 \end{pmatrix}, \tag{51} \]
\[ \kappa_1 = \frac{e^{\left(\frac{A-r}{\sigma} - A\right)T}}{e^{\left(\frac{A-r}{\sigma} - A\right)T} - 1}\, k_0, \qquad \kappa_2 = \frac{1}{1 - e^{\left(\frac{A-r}{\sigma} - A\right)T}}\, k_0. \tag{52} \]
We can now, for example, evaluate
\[ c^*(t) = \frac{r + A(\sigma-1)}{\sigma} \, \frac{e^{\frac{A-r}{\sigma} t}}{1 - e^{\left(\frac{A-r}{\sigma} - A\right)T}} \, k_0, \tag{53} \]
\[ c^*(0) = \frac{r + A(\sigma-1)}{\sigma} \, \frac{1}{1 - e^{\left(\frac{A-r}{\sigma} - A\right)T}} \, k_0. \tag{54} \]
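The closed form (47)-(54) is straightforward to evaluate numerically. A sketch with illustrative parameter values, checking the boundary conditions $k(0) = k_0$ and $k(T) = 0$:

```python
import numpy as np

# Consumption and capital paths for the AK model with CRRA utility.
A, r, sigma, T, k0 = 0.08, 0.05, 2.0, 50.0, 1.0
mu = (A - r) / sigma                      # consumption growth rate, (43)
lam2 = mu - A                             # exponent appearing in (52)

kap1 = np.exp(lam2 * T) / (np.exp(lam2 * T) - 1) * k0
kap2 = 1 / (1 - np.exp(lam2 * T)) * k0

def c_star(t):
    return (r + (sigma - 1) * A) / sigma * np.exp(mu * t) * kap2

def k(t):
    return np.exp(A * t) * kap1 + np.exp(mu * t) * kap2

print(k(0.0), k(T))                       # k0 and (numerically) zero
print(c_star(0.0))                        # initial consumption, as in (54)
```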
6.3 Sufficiency

Assume that $f$ and $u$ are concave in $(k, c)$ and that $\lambda \ge 0$. This implies that the Hamiltonian is concave in $(k, c)$. Then Pontryagin's necessary conditions (20) and (21), together with (22), (23) or (24), are sufficient.
6.4 Infinite horizon

Consider the infinite horizon problem
\[ \max_{c(t)} \int_0^\infty e^{-rt} u(k, c, t)\, dt \tag{55} \]
\[ \text{s.t. } \dot k = f(k, c, t), \tag{56} \]
\[ k(0) = k_0. \tag{57} \]
Pontryagin's conditions (20) and (21) are necessary also in the infinite horizon case, provided, of course, that there is a well defined solution. If there is a binding restriction on the state variable of the type $\lim_{T\to\infty} k(T) = \bar k$, this can help us pin down the solution. The finite horizon transversality conditions can, however, not immediately be used in the infinite horizon case. Suppose the maximized Hamiltonian is concave in $k$ for every $t$; then the conditions (20) and (21) plus the infinite horizon transversality conditions
\[ \lim_{T\to\infty} e^{-rT} \lambda(T)\, k(T) = 0, \text{ and} \tag{58} \]
\[ \lim_{T\to\infty} e^{-rT} \lambda(T) \ge 0 \tag{59} \]
provide a sufficient set of conditions for optimality (see de la Fuente, 1999, p. 577). Often, the Hamiltonian is concave in $(k, c)$ together; this is sufficient for the maximized Hamiltonian to be concave in $k$.

Sometimes a so-called No-Ponzi condition helps us make sure that the transversality conditions are satisfied. Suppose, for example, that the pay-off is $u(k, c, t) = u(c)$, that $k$ represents the debt of the agent, and, for simplicity, that $f(k, c, t) = c + \rho k - w$, so debt increases by the difference between consumption plus interest payments $\rho k$ and the wage $w$. It is reasonable to assume that creditors demand to be repaid in a present value sense: the discounted value of future repayments should always be at least as large as the debt. This is the No-Ponzi condition. When, in addition, the agent prefers to pay back no more than he owes, the implication is
\[ \lim_{T\to\infty} e^{-\rho T} k(T) = 0. \tag{60} \]
To see this, solve
\[ \dot k(t) - \rho k(t) = c(t) - w(t), \tag{61} \]
giving
\[ e^{-\rho t} \left( \dot k(t) - \rho k(t) \right) = e^{-\rho t} \left( c(t) - w(t) \right) \tag{62} \]
\[ e^{-\rho t} k(t) = \int_0^t e^{-\rho s} \left( c(s) - w(s) \right) ds + k(0) \tag{63} \]
\[ \lim_{T\to\infty} e^{-\rho T} k(T) = \int_0^\infty e^{-\rho s} \left( c(s) - w(s) \right) ds + k(0). \tag{64} \]
So the No-Ponzi requirement, that the PDV of "mortgage" repayments be no smaller than initial debt, i.e., $\int_0^\infty e^{-\rho s}\left( w(s) - c(s) \right) ds \ge k(0)$, implies $\lim_{T\to\infty} e^{-\rho T} k(T) \le 0$. Clearly, when marginal utility is strictly positive, the individual would never want to satisfy this with strict inequality, since he could then increase consumption. Therefore, $\lim_{T\to\infty} e^{-\rho T} k(T) = 0$.

The second necessary condition in (21) is now
\[ (\rho - r)\lambda(t) = -\dot\lambda(t) \tag{65} \]
\[ \lambda(t) = \lambda(0)\, e^{(r-\rho)t}. \tag{66} \]
So, provided marginal utility is positive at $t = 0$, (59) is satisfied. Furthermore,
\[ \lim_{T\to\infty} e^{-rT}\lambda(T)\, k(T) = \lambda(0) \lim_{T\to\infty} e^{-rT} e^{(r-\rho)T} k(T) \tag{67} \]
\[ = \lambda(0) \lim_{T\to\infty} e^{-\rho T} k(T) \tag{68} \]
\[ = 0, \tag{69} \]
where the last equality is the No-Ponzi condition.
Sometimes the sufficient conditions allow us to identify the optimal control as the stable manifold (saddle path) leading to a saddle-point stable steady state. Consider again the problem
\[ \max_{c(t)} \int_0^\infty e^{-rt} u(c)\, dt \tag{70} \]
\[ \text{s.t. } \dot k = f(k) - c, \quad k(0) = k_0, \tag{71} \]
which we analyzed graphically in section 3.5, showing the existence of a saddle path and a steady state with
\[ f'(k_{ss}) = r, \tag{72} \]
\[ c_{ss} = f(k_{ss}). \tag{73} \]
Restating the current value Hamiltonian,
\[ H(k, c, \lambda, t) = u(c) + \lambda(t) \left( f(k) - c \right), \tag{74} \]
we note that if both $u(c)$ and $f(k)$ are concave and $\lambda(t) \ge 0$, then $H(k, c, \lambda, t)$ is concave in $(k, c)$, so the conditions for using the sufficiency result are satisfied. In addition to (41), (42) and $k(0) = k_0$, we thus only need to verify that (58) and (59) are satisfied. This is straightforward:
\[ \lim_{T\to\infty} e^{-rT} \lambda(T) = \lim_{T\to\infty} e^{-rT} u'(c_{ss}) = 0, \tag{75} \]
\[ \lim_{T\to\infty} e^{-rT} \lambda(T)\, k(T) = \lim_{T\to\infty} e^{-rT} u'(c_{ss})\, k_{ss} = 0. \tag{76} \]
6.5 Present value Hamiltonian

Sometimes it is convenient to define the present value Hamiltonian, i.e., expressing everything in values as seen from time 0. In problem (1), the present value Hamiltonian is given by
\[ H(k, c, \mu, t) = e^{-rt} u(k, c, t) + \mu f(k, c, t), \tag{77} \]
where $\mu(t)$ is the present shadow value of the state variable, related to the current value shadow price by $\mu(t) = e^{-rt}\lambda(t)$. In this case, the necessary conditions for optimality are that $c^*(t)$ maximizes $H$ at each instant and that the co-state satisfies $-\dot\mu(t) = H_k(k, c^*, \mu, t)$, with the transversality conditions adjusted correspondingly.

The extension to several state variables is also straightforward. With a vector of state variables $\mathbf{k}$, the current value Hamiltonian is
\[ H(\mathbf{k}, c, \boldsymbol\lambda, t) = u(\mathbf{k}, c, t) + \sum_{i=1}^{n} \lambda_i f_i(\mathbf{k}, c, t), \tag{84} \]
where $\lambda_i$ is the shadow value associated with the state variable $k_i$. Each $\lambda_i(t)$ is continuous and satisfies the differential equation
\[ r\lambda_i(t) - \dot\lambda_i(t) = \frac{\partial}{\partial k_i} H(\mathbf{k}, c, \boldsymbol\lambda, t), \tag{85} \]
except when $c$ is discontinuous. For the transversality conditions, we have
\[ k_i(T) = \bar k_i \quad \text{case (1), or} \tag{86} \]
\[ \lambda_i(T) = 0 \quad \text{case (2), or} \tag{87} \]
\[ \lambda_i(T) \ge 0, \text{ and } \lambda_i(T)\left( k_i(T) - \bar k_i \right) = 0 \quad \text{case (3),} \tag{88} \]
for the end-condition of state variable $i$ belonging to case 1, 2 or 3.
7 Some numerical methods

7.1 Numerical solution to Bellman equations

When we cannot solve the Bellman equation analytically, there are several methods to approximate a solution numerically. One of the most straightforward methods, when the problem is autonomous, is to discretize the state space and iterate on the Bellman equation until it converges. When the Bellman equation is a contraction mapping, strong results make sure that this procedure converges to the correct value function.

When we discretize the state space, we restrict the state variable to take values from a finite set
\[ k_t \in \left\{ k^1, k^2, \dots, k^n \right\} = \mathbf{K}, \tag{1} \]
where the superscripts index the elements of $\mathbf{K}$. We then solve for $c$ from the law-of-motion for $k$:
\[ k_{t+1} = f(k_t, c_t) \tag{2} \]
\[ \Longleftrightarrow c_t = \tilde f(k_t, k_{t+1}). \tag{3} \]
We can then write the Bellman equation for the discretized problem as
\[ V(k_t) = \max_{k_{t+1} \in \mathbf{K}} u\left(k_t, \tilde f(k_t, k_{t+1})\right) + \beta V(k_{t+1}). \tag{4} \]
As you see, this is a Bellman equation for a constrained problem, i.e., the control variable is constrained relative to the case when $k_{t+1}$ is continuous. Two things should be noted. First, the Bellman equation is the true Bellman equation of the constrained problem and previous results hold; in particular, the contraction mapping theorems apply. Second, how important the constraint implied by the discretization is depends on how fine the grid is. Many (few) elements of $\mathbf{K}$ with small (large) distances between them imply that the constraint is weak (severe).

Denoting an arbitrary initial value function by $V_0(k_t)$, being $n$ numbers, we update this value function according to
\[ V_1(k_t) = \max_{k_{t+1} \in \mathbf{K}} u\left(k_t, \tilde f(k_t, k_{t+1})\right) + \beta V_0(k_{t+1}), \tag{5} \]
giving $V_1(k_t)$, a new set of $n$ numbers. We then iterate on the Bellman equation
\[ V_{s+1}(k_t) = \max_{k_{t+1} \in \mathbf{K}} u\left(k_t, \tilde f(k_t, k_{t+1})\right) + \beta V_s(k_{t+1}) \tag{6} \]
until $V_{s+1}(k_t) \approx V_s(k_t)$. For each of the $n$ values of $k_t$, we check $u(k_t, \tilde f(k_t, k_{t+1})) + \beta V(k_{t+1})$ for all $n$ values of $k_{t+1}$ and choose the $k_{t+1}$ that gives the highest value, giving $V_{s+1}(k_t)$. Therefore, each iteration requires $n^2$ evaluations when the state variable is unidimensional.¹⁰

¹⁰ When the state variable is of higher dimensionality, this method quickly becomes computationally too burdensome.
10
Lets consider a simple example.
max
fI
t+1
,c
t
g
1
t
1
t=0
,
t
ln (c
t
) (7)
s.t. /
t+1
= , (/
t
. c
t
) = /
c
t
+ (1 o) /
t
c
t
(8)
/
t
_ 0\t (9)
/
0
= / (10)
First, we solve for
c
t
=
, (/
t
. /
t+1
) = /
c
t
+ (1 o) /
t
/
t+1
. (11)
Then, we note that /
t
_
0. o
1
1
_
== /
t+1
_
0. o
1
1
_
, implying that
value function is bounded. If also , < 1. the Bellman equation
\ (/
t
) = max
c
t
ln (/
c
t
+ (1 o) /
t
/
t+1
) +,\ (/
t+1
) (12)
is a contraction mapping.
Let us parametrize, setting , = 0.9, o = .2 and c = 1,2 == o
1
1
= 25
and discretize the state space by requiring
/
t
[5. 10. 15. 20. 25] = 1 \t. (13)
Now set an initial value function, for example
\
0
(/) = ln / \/.
This is then updated in the following way. For each possible /
t
. /
t+1
we
calculate the left hand side of the Bellman equation, and solve the maximiza-
tion problem. So, for /
t
= 5. and all /
t+1
1 we have
ln (5
c
+ (1 o) 5 5) +.9 ln 5 = 1.66
ln (5
c
+ (1 o) 5 10) +.9 ln 10 =
ln (5
c
+ (1 o) 5 15) +.9 ln 15 =
ln (5
c
+ (1 o) 5 20) +.9 ln 20 =
ln (5
c
+ (1 o) 5 25) +.9 ln 25 =
10
When the state variable is higher of dimensionality, this method quickly becomes
computationally too burdensome.
78
Implying that the updated value function for /
t
= 5 is
\
1
(5) = 1.66.
For /
t
= 10.
ln (10
c
+ (1 o) 10 5) +.9 ln 5 = 3.27
ln (10
c
+ (1 o) 10 10) +.9 ln 10 = 2.22
ln (10
c
+ (1 o) 10 15) +.9 ln 15 =
ln (10
c
+ (1 o) 10 20) +.9 ln 20 =
ln (10
c
+ (1 o) 10 25) +.9 ln 25 =
implying
\
1
(10) = 3.27
In the same way, for /
t
= 15
ln (15
c
+ (1 o) 15 5) +.9 ln 5 = 3.83
ln (15
c
+ (1 o) 15 10) +.9 ln 10 = 3.84
ln (15
c
+ (1 o) 15 15) +.9 ln 15 = 2.30
ln (15
c
+ (1 o) 15 20) +.9 ln 20 =
ln (15
c
+ (1 o) 15 5) +.9 ln 25 =
\
1
(15) = 3.84
Doing this also for /
t
= 20 and /
t
= 25 completes the rst iteration. Then,
we repeat the iterations until we think the process has converged suciently,
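The whole iteration is a few lines of code. The following sketch reproduces the first-iteration values above and runs the updating until convergence:

```python
import numpy as np

# Value-function iteration for the example: beta = 0.9, delta = 0.2,
# alpha = 1/2, grid K = {5, 10, 15, 20, 25}, starting from V0(k) = ln k.
beta, delta, alpha = 0.9, 0.2, 0.5
K = np.array([5.0, 10.0, 15.0, 20.0, 25.0])

c = K[:, None] ** alpha + (1 - delta) * K[:, None] - K[None, :]  # c(k, k')
feasible = c > 0

V = np.log(K)                                   # initial guess V0
for it in range(200):
    rhs = np.where(feasible,
                   np.log(np.maximum(c, 1e-12)) + beta * V[None, :], -np.inf)
    V_new = rhs.max(axis=1)
    if it == 0:
        print(V_new)                            # [1.66, 3.27, 3.84, ...]
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print(V, K[rhs.argmax(axis=1)])                 # converged V and policy k'
```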
7.2 Band Matrix Methods for differential equations

Assume we want to solve the differential equation
\[ y''(t) + a\, y'(t) + b\, y(t) = g(t) \tag{14} \]
over some interval, with initial conditions given. If we want to solve this numerically, we first have to get rid of the abstract infinitely small differences called differentials. We approximate these with finite size forward differences such that
\[ y'(t) \approx \frac{y(t + \Delta t) - y(t)}{\Delta t}, \tag{15} \]
\[ y''(t) \approx \frac{\frac{y(t+\Delta t) - y(t)}{\Delta t} - \frac{y(t) - y(t - \Delta t)}{\Delta t}}{\Delta t} \tag{16} \]
\[ = \frac{y(t+\Delta t) - 2y(t) + y(t-\Delta t)}{\Delta t^2}. \tag{17} \]
Using this, we can solve the equation for a finite set of values in the following way. Say we want to solve the equation on the interval $t \in [p, q]$ and we know $y''(t_0) = c_1$ and $y'(t_0) = c_2$. We divide the interval for $t$ into $n$ equal parts and use the following notation:
\[ t_k = p + \frac{k(q - p)}{n}, \tag{18} \]
\[ t_k - t_{k-1} = \frac{q - p}{n} = \Delta t, \tag{19} \]
\[ y(t_k) = y_k. \tag{20} \]
This gives us the following equations:
\[ \frac{y_{-1} - 2y_0 + y_1}{\Delta t^2} = c_1 \tag{21} \]
\[ \frac{-y_0 + y_1}{\Delta t} = c_2 \tag{22} \]
\[ \frac{y_{-1} - 2y_0 + y_1}{\Delta t^2} + a\frac{-y_0 + y_1}{\Delta t} + b\, y_0 = g(t_0) \tag{23} \]
\[ \frac{y_0 - 2y_1 + y_2}{\Delta t^2} + a\frac{-y_1 + y_2}{\Delta t} + b\, y_1 = g(t_1) \tag{24} \]
\[ \vdots \tag{25-26} \]
\[ \frac{y_{n-1} - 2y_n + y_{n+1}}{\Delta t^2} + a\frac{-y_n + y_{n+1}}{\Delta t} + b\, y_n = g(t_n). \tag{27} \]
This provides $n + 3$ linear equations for the $n + 3$ unknowns $y_{-1}, \dots, y_{n+1}$. Writing this as a system, we have
\[ \mathbf{A} \begin{pmatrix} y_{-1} \\ \vdots \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} c_1 \\ c_2 \\ g(t_0) \\ g(t_1) \\ \vdots \\ g(t_n) \end{pmatrix} \tag{28} \]
\[ \begin{pmatrix} y_{-1} \\ \vdots \\ y_{n+1} \end{pmatrix} = \mathbf{A}^{-1} \begin{pmatrix} c_1 \\ c_2 \\ g(t_0) \\ g(t_1) \\ \vdots \\ g(t_n) \end{pmatrix}, \tag{29} \]
with (setting $n = 3$)
\[ \mathbf{A} = \begin{pmatrix}
\Delta t^{-2} & -2\Delta t^{-2} & \Delta t^{-2} & 0 & 0 & 0 \\
0 & -\Delta t^{-1} & \Delta t^{-1} & 0 & 0 & 0 \\
\Delta t^{-2} & -2\Delta t^{-2} - a\Delta t^{-1} + b & \Delta t^{-2} + a\Delta t^{-1} & 0 & 0 & 0 \\
0 & \Delta t^{-2} & -2\Delta t^{-2} - a\Delta t^{-1} + b & \Delta t^{-2} + a\Delta t^{-1} & 0 & 0 \\
0 & 0 & \Delta t^{-2} & -2\Delta t^{-2} - a\Delta t^{-1} + b & \Delta t^{-2} + a\Delta t^{-1} & 0 \\
0 & 0 & 0 & \Delta t^{-2} & -2\Delta t^{-2} - a\Delta t^{-1} + b & \Delta t^{-2} + a\Delta t^{-1}
\end{pmatrix}. \tag{30} \]
To get any accuracy, we should of course set $n$ much larger than 3. As we see, the matrix $\mathbf{A}$ contains many zeros, with a band of non-zeros around the diagonal. Due to this feature, it is easy for the computer to invert even when $n$ is in the order of hundreds.
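A sketch of the method (here with a dense solver for simplicity; a dedicated banded solver would exploit the band structure), using an illustrative right-hand side $g(t) = \sin t$ on $[0, 1]$:

```python
import numpy as np

# Band-matrix method for y'' + a y' + b y = g(t) on [0, 1]
# with y''(t_0) = c1 and y'(t_0) = c2, following (21)-(30).
a, b = 1.0, 2.0
c1, c2 = 0.0, 1.0
g = lambda t: np.sin(t)                   # illustrative right-hand side
n = 200
dt = 1.0 / n
t = np.linspace(0.0, 1.0, n + 1)

m = n + 3                                 # unknowns y_{-1}, y_0, ..., y_{n+1}
A = np.zeros((m, m))
rhs = np.zeros(m)
A[0, 0:3] = [dt**-2, -2 * dt**-2, dt**-2]            # y''(t_0) = c1
A[1, 1:3] = [-dt**-1, dt**-1]                         # y'(t_0) = c2
rhs[0], rhs[1] = c1, c2
for k in range(n + 1):                    # difference equation at each t_k
    A[k + 2, k:k + 3] = [dt**-2, -2 * dt**-2 - a * dt**-1 + b,
                         dt**-2 + a * dt**-1]
    rhs[k + 2] = g(t[k])

y = np.linalg.solve(A, rhs)               # y[1:n+2] approximates y(t_0..t_n)
print(y[1], (y[2] - y[1]) / dt)           # y(0) as solved for; slope equals c2
```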
7.3 Newton-Raphson

Suppose we are looking for an optimum of the real valued function $f(\mathbf{x})$, $\mathbf{x} \in \mathbb{R}^n$, where $f$ is twice differentiable. A standard way to do this numerically is to apply the Newton-Raphson algorithm. If the optimum is interior, it satisfies the necessary first order condition that the gradient is zero, i.e., at the optimum, denoted $\mathbf{x}^*$,
\[ \nabla f(\mathbf{x}^*) = 0. \tag{31} \]
Now apply a first order linear approximation to the gradient around some initial point $\mathbf{x}_0$:
\[ 0 = \nabla f(\mathbf{x}^*) \approx \nabla f(\mathbf{x}_0) + D^2 f(\mathbf{x}_0) \left( \mathbf{x}^* - \mathbf{x}_0 \right), \tag{32} \]
where $D^2 f(\mathbf{x}_0)$ is the Hessian matrix of second derivatives of $f$. Provided the Hessian is invertible, we can get an approximation to $\mathbf{x}^*$:
\[ \mathbf{x}^* \approx \mathbf{x}_0 - \left( D^2 f(\mathbf{x}_0) \right)^{-1} \nabla f(\mathbf{x}_0). \tag{33} \]
From this we can construct a search algorithm that under some circumstances produces better and better approximations:
\[ \mathbf{x}_{s+1} = \mathbf{x}_s - \left( D^2 f(\mathbf{x}_s) \right)^{-1} \nabla f(\mathbf{x}_s). \tag{34} \]
If we don't have analytic expressions for the gradient and Hessian, we can use numerical approximations, for example the forward difference method. For a small number $\varepsilon$, we have
\[ \frac{\partial f(\mathbf{x})}{\partial x_1} \approx \frac{f\left( \mathbf{x} + (\varepsilon, 0, \dots, 0)' \right) - f(\mathbf{x})}{\varepsilon}, \tag{35} \]
\[ \frac{\partial^2 f(\mathbf{x})}{\partial x_1^2} \approx \frac{f\left( \mathbf{x} + (\varepsilon, 0, \dots, 0)' \right) - 2 f(\mathbf{x}) + f\left( \mathbf{x} - (\varepsilon, 0, \dots, 0)' \right)}{\varepsilon^2}, \tag{36} \]
\[ \frac{\partial^2 f(\mathbf{x})}{\partial x_2 \partial x_1} \approx \frac{f\left( \mathbf{x} + (\varepsilon, \varepsilon, 0, \dots, 0)' \right) - f\left( \mathbf{x} + (0, \varepsilon, 0, \dots, 0)' \right) - f\left( \mathbf{x} + (\varepsilon, 0, \dots, 0)' \right) + f(\mathbf{x})}{\varepsilon^2}. \tag{37-38} \]
One should be very careful with this method, since it can only find local optima when $f$ is not globally concave. In well-behaved problems it is, however, easily programmed and fairly quick.
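A minimal implementation of (34) with forward-difference derivatives in the spirit of (35)-(38), applied to an illustrative globally concave function (so the local-optimum caveat does not bite):

```python
import numpy as np

def f(x):
    # illustrative concave objective with maximum at (1, 2)
    return -(x[0] - 1.0) ** 2 - 2.0 * (x[1] - 2.0) ** 2

def gradient(f, x, eps=1e-5):
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = eps
        g[i] = (f(x + e) - f(x)) / eps            # formula (35)
    return g

def hessian(f, x, eps=1e-4):
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            # forward-difference analogue of (36)-(38)
            H[i, j] = (f(x + ei + ej) - f(x + ej)
                       - f(x + ei) + f(x)) / eps**2
    return H

x = np.array([5.0, -3.0])                          # initial guess
for _ in range(20):
    x = x - np.linalg.solve(hessian(f, x), gradient(f, x))  # step (34)
print(x)                                           # close to (1, 2)
```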
8 Note on constant hazard models

Consider as an example a pool of unemployed, with measure (size) at time $t$ given by $x(t)$. Suppose also that there is an outflow of people from the unemployment pool (we are for now disregarding the inflow of new unemployed). The outflow is determined by the assumption that there is a constant probability per unit of time, denoted $h$, of getting hired. Using this, we can derive a law-of-motion for $x(t)$. Over a small (infinitesimal) interval of time $dt$, we have
\[ x(t + dt) = x(t)(1 - h\, dt) \tag{1} \]
\[ x(t + dt) - x(t) = -x(t)\, h\, dt \tag{2} \]
\[ \lim_{dt\to 0} \frac{x(t+dt) - x(t)}{dt} = \dot x(t) = -h\, x(t). \tag{3} \]
This is a simple differential equation with solution
\[ x(t) = x(0)\, e^{-ht}. \tag{4} \]
Now consider an individual who is unemployed at time 0 and let the random variable $s$ denote the time she will stay unemployed. Let us now derive the probability density function $f(s)$, and let $F(s)$ denote the cumulative distribution function, i.e., $F(s)$ is the probability that the unemployment spell is no longer than $s$. Clearly,
\[ F(s) = \int_0^s f(t)\, dt. \tag{5} \]
From (4) we know that at time $s$, a share
\[ \frac{x(s)}{x(0)} = e^{-hs} \tag{6} \]
remains unemployed, and since hiring is completely random,
\[ 1 - \frac{x(s)}{x(0)} = 1 - e^{-hs} = F(s), \tag{7} \]
and consequently
\[ f(s) = h\, e^{-hs}. \tag{8} \]
Let us also verify that the probability of finding a job per unit of time, conditional on not having found one, stays constant. This probability is
\[ \frac{f(t)}{1 - F(t)} = \frac{h\, e^{-ht}}{e^{-ht}} = h. \tag{9} \]
We can now compute the average spell length as
\[ \int_0^\infty s\, f(s)\, ds = \int_0^\infty s\, h\, e^{-hs}\, ds. \tag{10} \]
Using the formula for integration by parts,
\[ \int_0^\infty s\, h\, e^{-hs}\, ds = \left[ -s\, e^{-hs} \right]_0^\infty + \int_0^\infty e^{-hs}\, ds \tag{11} \]
\[ = 0 - 0 + \left[ -\frac{e^{-hs}}{h} \right]_0^\infty = \frac{1}{h}. \tag{12} \]
Similarly, we can compute the median length $m$, i.e., solve $F(m) = 1/2$ from
\[ 1/2 = 1 - e^{-hm} \tag{13} \]
\[ e^{-hm} = 1/2 \tag{14} \]
\[ hm = \ln 2 \tag{15} \]
\[ m = \frac{\ln 2}{h} \approx \frac{.69}{h}. \tag{16} \]
This is sometimes called the rule of 69: expressing $h$ in percent per unit of time, the half-life is found by dividing 69 by $h$. For example, if the probability of finding a job is 5% per week, it takes $69/5 \approx 14$ weeks before half the pool of unemployed have found jobs, and the average unemployment spell is $1/0.05 = 20$ weeks.

Another example: it is often found that differences in GDP between countries (after controlling for differences in savings and schooling) close at 3% per year. Then half the difference is left after $69/3 = 23$ years. The rule of 69 also works when something is growing: if a bank account yields 4% return per year, it takes $69/4 \approx 17$ years for it to double.
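The mean and median formulas are easy to confirm by simulating exponential spell lengths, as in the 5%-per-week example:

```python
import numpy as np

# Constant-hazard spells are exponential with mean 1/h and median ln(2)/h.
rng = np.random.default_rng(1)
h = 0.05                                   # 5% hazard per week
spells = rng.exponential(1 / h, 1_000_000)
print(spells.mean(), 1 / h)                # about 20 weeks
print(np.median(spells), np.log(2) / h)    # about 13.9 weeks
```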
9 T-mappings

Instead of using time as an argument of the value function, let us use time subscripts. We can then write the Bellman equation as
\[ V_t(k_t) = \left\{ \begin{array}{l} \max_{c_t} \left( u(k_t, c_t) + \beta V_{t+1}(k_{t+1}) \right) \\ \text{s.t. } k_{t+1} = f(k_t, c_t). \end{array} \right. \]
Now let us define an operator that maps next period value functions, $V_{t+1} : D \to \mathbb{R}$ (where $D$ is the state space, i.e., the set of possible values for the state variable), into functions that provide the current value associated with each $k_t$ in the state space. Thus the operator, which we will call $T$, maps elements of the space of value functions, call that space $C$, back into the same space, i.e., $T : C \to C$.¹¹ Formally, we define the $T$ mapping as
\[ T V_{t+1} = \left\{ \begin{array}{l} \max_{c_t} \left( u(k_t, c_t) + \beta V_{t+1}(k_{t+1}) \right) \\ \text{s.t. } k_{t+1} = f(k_t, c_t). \end{array} \right. \]
When we want to indicate that the mapped function, $T V_{t+1}$, is a function of, e.g., $k_t$, we append $(k_t)$:
\[ T V_{t+1} = T V_{t+1}(k_t). \]
Note that while $V_{t+1}$ is a function of $k_{t+1}$, $T V_{t+1}$ is a function of $k_t$.

¹¹ Later on, we will make assumptions such that we can further restrict the space of possible value functions.

Let us take an example. Suppose $u(k_t, c_t) = \ln c_t$ and $f(k_t, c_t) = k_t - c_t \;\forall t$. Let us now see what the $T$ operator does. Take a particular element of $C$, for example $\ln k : \mathbb{R}_+ \to \mathbb{R}$. So here the state space $D = \mathbb{R}_+$. Now,
\[ T \ln k_t = \left\{ \begin{array}{l} \max_{c_t} \left( \ln c_t + \beta \ln k_{t+1} \right) \\ \text{s.t. } k_{t+1} = k_t - c_t. \end{array} \right. \]
The first order condition is
\[ \frac{1}{c_t} = \frac{\beta}{k_t - c_t} \;\Rightarrow\; c_t = \frac{k_t}{1 + \beta}; \]
thus,
\[ T \ln k_t = \ln \frac{k_t}{1 + \beta} + \beta \ln\left( k_t - \frac{k_t}{1 + \beta} \right) = (1 + \beta) \ln k_t + \beta \ln \beta - (1 + \beta) \ln (1 + \beta), \]
which is another function $\mathbb{R}_+ \to \mathbb{R}$. So we see that $T$ maps functions into (potentially) other functions, concluding the example.

Using the $T$ operator, the Bellman equation in period $t$ can be written
\[ V_t = T V_{t+1}, \]
or, equivalently,
\[ V_t(k_t) = T V_{t+1}(k_t). \]
Furthermore, in the next period, the next period's Bellman equation is
\[ V_{t+1} = T V_{t+2}. \]
Thus,
\[ V_t = T V_{t+1} = T^2 V_{t+2}. \]
The meaning of $T^2 V_{t+2}$ in words is: give me (I am $T^2$) a value function that applies in period $t+2$ (you don't need to say anything about what $k_{t+2}$ is going to be), and I give you a function that tells you the value in period $t$ associated with each value of $k_t$ in the state space. Formally,
\[ T^2 V_{t+2} = \left\{ \begin{array}{l} \max_{c_t} \left( u(k_t, c_t) + \beta \left( T V_{t+2}(k_{t+1}) \right) \right) \\ \text{s.t. } k_{t+1} = f(k_t, c_t) \end{array} \right. \]
\[ = \left\{ \begin{array}{l} \max_{c_t} \left( u(k_t, c_t) + \beta \left( \max_{c_{t+1}} \left( u(k_{t+1}, c_{t+1}) + \beta V_{t+2}(k_{t+2}) \right) \right) \right) \\ \text{s.t. } k_{t+1} = f(k_t, c_t), \quad k_{t+2} = f(k_{t+1}, c_{t+1}). \end{array} \right. \]
Now, suppose in the time-autonomous case that we can find a limiting function
\[ V(k_t) = \lim_{s\to\infty} T^s V_{t+s}(k_t); \]
then, as shown in the lecture notes, this function satisfies the Bellman equation
\[ V(k_t) = T V(k_t), \]
i.e., it is a fixed point of the $T$ operator. This means that, in the space $C$, $V$ is an element such that $T$ maps it back onto itself. If $T$ is a contraction mapping on $C$, this element exists and is unique.