An Overview of the Hamilton-Jacobi Equation
ALAN CHANG
1. Introduction
1.2. A word on notation. Notation used for partial differential equations varies greatly
across different texts. The conventions adopted by this paper are presented in Appendix A.
The reader is highly encouraged to skim through that appendix now, and to refer back to it
whenever necessary during the reading of this paper.
2. Hamiltonian Mechanics
One function, the Hamiltonian $H : \mathbb{R}^n_p \times \mathbb{R}^n_x \to \mathbb{R}$, concisely and completely expresses the
constraints of the system, via Hamilton's equations:
$$\dot{p}(t) = -\nabla_x H(p(t), x(t)) \qquad \dot{x}(t) = \nabla_p H(p(t), x(t)) \tag{2.1}$$
Since $H$ is given, this is a system of $2n$ ordinary differential equations, with $p(t)$ and $x(t)$
as the unknowns. If initial data is given (for example, if $p(0) = p_0$ and $x(0) = x_0$), then the
ODE can be solved and we can describe the motion of the system.
Example 2.1. If we use rectangular coordinates, the Hamiltonian for a particle of mass $m$ in
a force field is
$$H(p, x) = \frac{1}{2m} |p|^2 + V(x)$$
where $V : \mathbb{R}^n_x \to \mathbb{R}$ is the potential energy. Then equations (2.1) reduce to $\dot{p} = -\nabla_x V(x)$
and $\dot{x} = \frac{1}{m} p$. The first equation is a statement of Newton's second law $F = ma$. The second
equation relates the classical position and momentum vectors.
Remark 2.2. Observe that in Example 2.1, the Hamiltonian is equal to the total energy of
the system. This is no coincidence. To see why, fix a Hamiltonian $H : \mathbb{R}^n_p \times \mathbb{R}^n_x \to \mathbb{R}$, and let
$(p(t), x(t))$ be a solution to Hamilton's equations. Then set $E(t) = H(p(t), x(t))$. We have
$$\dot{E}(t) = \nabla_p H(p(t), x(t)) \cdot \dot{p}(t) + \nabla_x H(p(t), x(t)) \cdot \dot{x}(t) = 0$$
where the equality on the right is due to (2.1). Thus, the quantity $E(t)$ is indeed conserved,
and the Hamiltonian admits an interpretation as energy.
However, a caution: we have defined a Hamiltonian as a function of $p$ and $x$. If the
Hamiltonian is also time-dependent, that is, a function of $p$, $x$ and $t$, then it is not the case
that $H$ is conserved.
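To make Remark 2.2 concrete, here is a minimal numerical sketch (an illustration, not part of the paper): it integrates Hamilton's equations (2.1) for the hypothetical choice $H(p, x) = \frac{1}{2}p^2 + \frac{1}{2}x^2$ (a unit-mass harmonic oscillator) with a symplectic Euler step, and checks that $E(t) = H(p(t), x(t))$ stays approximately constant. The step size, horizon, and integration scheme are all illustrative choices.

```python
def hamilton_flow(p0, x0, dt=1e-3, steps=10_000):
    """Integrate Hamilton's equations (2.1) for H(p, x) = p^2/2 + x^2/2
    (a unit-mass harmonic oscillator) with the symplectic Euler scheme."""
    p, x = p0, x0
    for _ in range(steps):
        p -= dt * x          # dp/dt = -dH/dx = -x
        x += dt * p          # dx/dt =  dH/dp =  p
    return p, x

def energy(p, x):
    """E(t) = H(p(t), x(t)), the conserved quantity of Remark 2.2."""
    return 0.5 * p * p + 0.5 * x * x

p1, x1 = hamilton_flow(0.0, 1.0)
# E(t) should stay close to E(0) = 0.5 along the whole trajectory
print(abs(energy(p1, x1) - energy(0.0, 1.0)))
```

A symplectic scheme is used here because it keeps the energy error bounded over long times, which makes the conservation law visible numerically.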
2. It is true that $L$, like $H$, is a function of $2n$ independent variables. However, the resulting Euler-Lagrange
equation is a system of $n$ second-order ODEs whose unknown is $x(t)$. In contrast, Hamilton's equations are
a system of $2n$ first-order ODEs whose unknowns are $p(t)$ and $x(t)$.
momentum to be $p = \nabla_v L(v, x)$. The Hamiltonian $H$ is constructed from $L$ via the Legendre
transform:
$$H(p, x) = \sup_{v \in \mathbb{R}^n} \{p \cdot v - L(v, x)\}$$
Then the Euler-Lagrange equation translates into Hamilton's equations. It is a fact that
the Legendre transform is its own inverse, so
$$L(v, x) = \sup_{p \in \mathbb{R}^n} \{p \cdot v - H(p, x)\}$$
We will not go into detail here, but this relationship between H and L will appear in
subsection 5.3 when we discuss the Hopf-Lax formula.
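As an illustration (not part of the paper), the Legendre transform can be approximated numerically by replacing the sup with a max over a finite grid of $p$ values; the grid bounds and resolution below are arbitrary choices. For the free-particle Hamiltonian $H(p) = \frac{1}{2}p^2$ (Example 2.1 with $m = 1$, $V = 0$) this recovers $L(v) = \frac{1}{2}v^2$.

```python
import numpy as np

def legendre(H, grid):
    """Numerical Legendre transform: L(v) = sup_p { p*v - H(p) },
    with the sup approximated by a max over a finite grid of p values."""
    return lambda v: np.max(grid * v - H(grid))

p = np.linspace(-10, 10, 100_001)      # illustrative grid
H = lambda q: 0.5 * q**2               # free-particle Hamiltonian, m = 1
L = legendre(H, p)
# For H(p) = p^2/2 the transform gives L(v) = v^2/2; applying it twice
# recovers H, illustrating that the transform is its own inverse.
print(L(3.0))   # close to 4.5
```

The grid must be wide enough to contain the maximizing $p$ (here $p = v$); otherwise the truncated max underestimates the sup.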
Figure 2.1. Left: Path in phase space. Right: Same path in extended phase space.
The form $\omega^1$ is called the integral invariant of Poincaré-Cartan. (The reader is invited to
read Appendix B for a review of differential forms.)
Lemma 2.3. The solutions to Hamilton's equations, viewed as paths in the extended phase
space, are the bi-characteristics of $\omega^1$.
Note that the upper left $2n \times 2n$ submatrix has full rank, so the kernel has dimension at
most one. The dimension is in fact one because $\xi = (-\nabla_x H, \nabla_p H, 1)$ lies in the kernel.
Suppose that we start at some point $(p_0, x_0, t_0)$ in the extended phase space, and move
along a path so that the tangent vector of the path at any point $(p, x, t)$ is parallel to the
eigenvector $(-\nabla_x H(p, x), \nabla_p H(p, x), 1)$ at that point. Then, by Definition B.13, our path
is a bi-characteristic of $\omega^1$ in the extended phase space. Since we are always moving in the
positive time direction, we can represent the resulting path by the vector functions $p(t)$ and
$x(t)$.
To compute, for example, $\dot{p}_1(t)$, we consider the map $(p(t), x(t), t) \mapsto (p_1(t), t)$, which is
the projection of the path in extended phase space onto the space $\mathbb{R}_{p_1} \times \mathbb{R}_t$. We take the
ratio of the first and last (scalar) components of $(-\nabla_x H, \nabla_p H, 1)$, giving us
$$\dot{p}_1(t) = -\partial_{x_1} H(p(t), x(t))$$
By taking ratios of other components with the last component, we have $\dot{p}_i(t) = -\partial_{x_i} H(p(t), x(t))$ and $\dot{x}_i(t) = \partial_{p_i} H(p(t), x(t))$. Viewed collectively, these $2n$ equations are
precisely Hamilton's equations (2.1).
Definition 2.4. A map $g : \mathbb{R}^n_p \times \mathbb{R}^n_x \to \mathbb{R}^n_P \times \mathbb{R}^n_X$ is canonical if it preserves the differential
form $\omega^2 = dp \wedge dx = \sum_i dp_i \wedge dx_i$, that is, if $g^*(\omega^2) = \omega^2$.
Since (2.2) shows that the differential 1-form $p \cdot dx - P \cdot dX$ is closed, we know the form is
exact. That is, there exists a function $S : \mathbb{R}^n_p \times \mathbb{R}^n_x \to \mathbb{R}$ such that $p \cdot dx - P \cdot dX = dS$.³ Then
$$\omega^1 = p \cdot dx - H \, dt = P \cdot dX - H \, dt + dS \tag{2.3}$$
Let $K : \mathbb{R}^n_P \times \mathbb{R}^n_X \to \mathbb{R}$ be given by $K(P, X) = H(p(P, X), x(P, X))$. Because of (2.3),
the bi-characteristics of $\omega^1$ in the new coordinates $(P, X)$ are given by
$$\dot{P}(t) = -\nabla_X K(P(t), X(t)) \qquad \dot{X}(t) = \nabla_P K(P(t), X(t)) \tag{2.4}$$
Since (2.1) and (2.4) both describe the bi-characteristics of $\omega^1$, they represent the same
path in the phase space, just with different coordinates. Thus, Hamilton's equations hold as
well for the new space, with the same Hamiltonian. We have just shown
Theorem 2.6. Canonical transformations preserve Hamilton's equations.
Note that both sides are differential forms on $\mathbb{R}^n \times \mathbb{R}^n$. We now allow $t$ to vary. Let
$S(p, x, t) = S_t(p, x)$. Then⁴
$$dS = \partial_t S \, dt + dS_t \qquad dX = \partial_t X \, dt + dX_t \tag{2.6}$$
$(p_0, x_0)$ to $(p, x)$. $S$ is well defined because the differential form is closed and the space $\mathbb{R}^n_p \times \mathbb{R}^n_x$ is simply
connected.
4. Here, we emphasize a point on notation from Appendix A: $\partial_t X$ is not the same as $\dot{X}$. The former means:
view $X$ as a function $\mathbb{R}^n_p \times \mathbb{R}^n_x \times \mathbb{R}_t \to \mathbb{R}^n$ and take the partial derivative with respect to $t$. The latter
means: let $X : \mathbb{R}_t \to \mathbb{R}^n$ be a solution to Hamilton's equations (together with $P : \mathbb{R}_t \to \mathbb{R}^n$), and take the
derivative.
Theorem 2.7. If $(p, x, t) \mapsto (P(p, x, t), X(p, x, t))$ is a time-dependent canonical transformation, then the system in the new coordinates satisfies Hamilton's equations with $K(P, X, t) = H + P \cdot \partial_t X + \partial_t S$.
Remark 2.8. The transformed Hamiltonian, $K$, is also time-dependent. Thus, as pointed
out in Remark 2.2, $K$ is not necessarily constant along a solution $(P(t), X(t))$.
2.6. Type-2 generating functions. We take a brief detour to describe a class of canonical
transformations, using a tool called a generating function.
Let's start with any transformation. The transformation gives us a function $S(p, x, t)$. Observe
that $d(P \cdot X) = P \cdot dX + X \cdot dP + (\partial_t P \cdot X + P \cdot \partial_t X) \, dt$, so (2.7) can be written as
$$d(P \cdot X + S) = p \cdot dx + X \cdot dP + C \, dt$$
where we have collected all the $dt$ terms into $C$ (since they will not be important). Let
$u(P, x, t) = P \cdot X + S(p, x, t)$. That is, we describe our space with the new momenta $P$ and
the old coordinates $x$, assuming this is possible. We have
$$du = p \cdot dx + X \cdot dP + C \, dt$$
Note that we have described a canonical transformation by a single function $u$. This function is an example of a generating function, that is, a function that generates a canonical
transformation.⁵
We can write the new Hamiltonian in terms of the old Hamiltonian and the generating
function:
Lemma 2.9. If $(p, x, t) \mapsto (P(p, x, t), X(p, x, t))$ is a time-dependent canonical transformation generated by $u(P, x, t)$, then the new Hamiltonian $K$, the old Hamiltonian $H$, and the
5. There are 4 types of generating functions, and $u$ falls in the class of type-2 generating functions. The
generating functions used here have no relation to the generating functions used in combinatorics.
so
$$\begin{aligned} \partial_t S &= \nabla_P u \cdot \partial_t P + \partial_t u - P \cdot \partial_t X - \partial_t P \cdot X \\ &= X \cdot \partial_t P + \partial_t u - P \cdot \partial_t X - \partial_t P \cdot X \\ &= \partial_t u - P \cdot \partial_t X \end{aligned}$$
$$\dot{P}(t) = 0 \qquad \dot{X}(t) = 0$$
Note that $P$ has no role in the equation above, so we can assume that $u$ has no explicit
dependence on $P$. Our equation reduces to
In this section, we work with $n = 1$ and take $H(p, x) = \frac{1}{2} p^2$. (Recall, from Example 2.1,
that this $H$ is the Hamiltonian for a free particle.)
For given initial data $g(x)$, we consider a classical solution to the PDE
$$\partial_t u + \tfrac{1}{2} (\partial_x u)^2 = 0 \text{ on } \mathbb{R}_x \times (0, \infty)_t, \qquad u = g \text{ on } \mathbb{R}_x \times \{0\}_t \tag{3.1}$$
(By classical, we simply mean that $u$ is differentiable, so that the PDE can be satisfied
everywhere on $\mathbb{R}_x \times (0, \infty)_t$.)
Proof. Suppose $v(x, t)$ is a differentiable solution to (3.2). Consider a function $x(s)$ with
$x(0) = x_0$. Then $(x(s), s) \in \mathbb{R}_x \times [0, \infty)_t$ traces out a path through the space, beginning at
$(x_0, 0)$. We want to strategically choose the path $x(s)$ so that $v$ behaves nicely along the
path. To do this, we note that
$$\frac{d}{ds} v(x(s), s) = \partial_x v(x(s), s) \, \dot{x}(s) + \partial_t v(x(s), s)$$
If we require that our path $x(s)$ satisfy the ODE $\dot{x}(s) = v(x(s), s)$, then the right hand
side above is zero, due to (3.2). In that case $v$ is constant along the path $(x(s), s)$. It follows
that $\ddot{x}(s) = 0$, so $x$ is linear.
We have the initial data $x(0) = x_0$ and $\dot{x}(0) = v(x(0), 0) = h(x_0)$, so our path is $x(s) = x_0 + s \, h(x_0)$.
Remark 3.2. This is an application of the method of characteristics, which is used to solve
first-order nonlinear PDE. The general technique reduces the PDE into a system of ODEs.
The lines $x = x_0 + t \, h(x_0)$ are called the characteristic lines of (3.2).
Example 3.3. Suppose we have initial data $h(x) = 1$. Then the characteristic lines are given
by $x = x_0 + t$, and $v$ has the value 1 along each line. (See Figure 3.1.) Thus, we see that
$v(x, t) = 1$ is the unique solution to Burgers' equation with the given initial condition.
Example 3.4. Suppose $h(x) = x$. Then $v$ has the value $x_0$ along the line $x = x_0 + x_0 t$. (See
Figure 3.2.) The resulting function $v(x, t) = \frac{x}{1+t}$ is the unique solution to the PDE.
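The construction in the proof can be turned into a small numerical sketch (an illustration, not the paper's construction): before characteristics cross, the map $x_0 \mapsto x_0 + t \, h(x_0)$ is strictly increasing, so for a given $(x, t)$ we can invert it by bisection and read off $v(x, t) = h(x_0)$. The bracketing interval and iteration count below are arbitrary choices.

```python
def solve_by_characteristics(h, x, t, lo=-100.0, hi=100.0):
    """Recover v(x, t) from the characteristic relation x = x0 + t*h(x0):
    find x0 by bisection (valid before characteristics cross, while
    x0 -> x0 + t*h(x0) is increasing), then return v(x, t) = h(x0)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid + t * h(mid) < x:
            lo = mid
        else:
            hi = mid
    return h(0.5 * (lo + hi))

# Example 3.4: h(x) = x gives v(x, t) = x / (1 + t)
print(solve_by_characteristics(lambda z: z, x=2.0, t=1.0))  # close to 1.0
```

The monotonicity assumption is exactly what fails at a shock; once two characteristics carry different values to the same point, no single $x_0$ can be recovered.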
3.2. Non-existence of a classical solution. In both Example 3.3 and Example 3.4, there
was exactly one differentiable solution. However, we can easily come up with initial
data for which that is not true.
Suppose we have initial data $g(x) = -|x|$ to the Hamilton-Jacobi equation (3.1).⁶ Then $v$
has initial data
$$h(x) = \begin{cases} 1, & \text{if } x < 0 \\ -1, & \text{if } x > 0 \end{cases}$$
However, we immediately see a problem with our given initial data. Take, for example, the
characteristic lines $x = -1 + t$ and $x = 1 - t$. $v$ must be constant along these two lines. This
is fine for all $t < 1$, but the two characteristics intersect at $(x, t) = (0, 1)$. (This intersection
is called a shock.) Since the values of $v$ along the two lines are different, there cannot be a
differentiable solution.
3.3. Falling back to a weak solution. We can still use the lemma to help us build a
plausible solution u to (3.1).
If we return to the two characteristic lines $x = -1 + t$ and $x = 1 - t$, we note that these lines
cause us no problems until we reach $x = 0$. This is, in fact, the case for all characteristic
lines: for $a > 0$, the pair of characteristic lines $x = -a + t$ and $x = a - t$ intersect at
$(x, t) = (0, a)$. (See Figure 3.3.)
6. We should not expect lack of regularity of the initial data, by itself, to cause the non-existence of a
differentiable solution. For example, let $u$ be the solution to the heat equation $u_t(x, t) - \Delta_x u(x, t) = 0$ for
initial data $g(x)$. If $g$ is bounded and continuous, then $u$ becomes instantaneously smooth for all $t > 0$, even
if $g$ is not smooth (or even differentiable).
Thus, we can divide the space $\mathbb{R}_x \times [0, \infty)_t$ along $x = 0$ and consider the function
$$v(x, t) = h(x) = \begin{cases} 1, & \text{if } x < 0 \\ -1, & \text{if } x > 0 \end{cases}$$
Integrating in $x$ gives
$$u(x, t) = -|x| - \tfrac{1}{2} t \tag{3.3}$$
The constant of integration is taken to be $-\tfrac{1}{2} t$ so that $u$ does indeed solve (3.1) when
$x \neq 0$.
At this point, we may try to define a weak solution to (3.1) to be a function u(x, t) which
satisfies the PDE everywhere except on a single line. However, is this notion adequate?
Suppose at a shock, we choose to kill off the characteristic lines emanating from x > 0,
and to continue the ones from x < 0. (See Figure 3.4.)
Figure 3.4. Another way to resolve the intersection of the characteristic lines.
giving us
$$u(x, t) = \begin{cases} x - \tfrac{1}{2} t, & \text{if } x < t \\ -x - \tfrac{1}{2} t, & \text{if } x > t \end{cases} \tag{3.4}$$
This solves (3.1) except on the line $x = t$. Hence, it is also a weak solution, in the
sense described above. However, this solution is worse than (3.3) because this one is not
continuous.
For our given initial data $g(x) = -|x|$, we may conclude, based on continuity, that (3.3) is
a better weak solution than (3.4), but in other situations, it may be difficult to determine
the best way to resolve the intersection of characteristic lines.
In the following sections, we will develop some theory to give a better notion of a weak
solution, and we will return to the particular issue here in subsection 5.3.
4. Control Theory
4.1. Introduction. We now move away from physics and towards optimal control theory.
Interestingly, we will start with the solution $u(x, t)$ and find a PDE that it solves,
which turns out to have the form of the Hamilton-Jacobi equation. Remarkably, the
theory developed here can be used to solve the Hamilton-Jacobi equation for a certain class
of Hamiltonians, as we will see in subsection 5.3.
where $f : \mathbb{R}^n \to \mathbb{R}^n$ is some fixed function. The goal is to solve for $y : [0, t] \to \mathbb{R}^n$, a path
in the space $\mathbb{R}^n_y \times \mathbb{R}_s$.
We modify this ODE as follows: Let $A$ be some compact subset of, say, $\mathbb{R}^m$. We will call
a map $\alpha : [0, t] \to A$ a control. Let $\mathcal{A}$ be the set of controls. We let $f : \mathbb{R}^n \times A \to \mathbb{R}^n$ vary
based on the control, and consider the modified ODE
$$\dot{y}(s) = f(y(s), \alpha(s)) \text{ on } \mathbb{R}^n_y \times (0, t)_s, \qquad y(t) = x \tag{4.1}$$
In a way, the control steers the resulting path by varying the ODE's solution. We can
let $y^{\alpha(\cdot)}(s)$ denote the solution to the ODE for a particular control $\alpha(\cdot)$.
Example 4.1. Suppose that $n = 1$, that $A = [-1, 1]$, and that $f(y, \alpha) = \alpha$. Then our ODE is
$\dot{y}(s) = \alpha(s)$, so the control $\alpha(s)$ represents the velocity. Take the particular case $x = 1$ and
$t = 1$.
Figure 4.1 shows the resulting paths $y^{\alpha(\cdot)}(s)$ for some controls $\alpha(\cdot)$. Note that there is no
control for which $y(0) < 0$ or $y(0) > 2$; at these starting points, we cannot run fast enough
to travel to $y = 1$ by time $t = 1$.
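A quick numerical sketch of Example 4.1 (an illustration, not from the paper): since $\dot{y} = \alpha$ and $y(t) = x$, the starting point is $y(0) = x - \int_0^t \alpha(s)\,ds$, and with $|\alpha| \le 1$, $x = 1$, $t = 1$ this confines $y(0)$ to $[0, 2]$, matching the discussion of Figure 4.1. The midpoint-rule discretization is an arbitrary choice.

```python
def start_point(alpha, x=1.0, t=1.0, n=1000):
    """For Example 4.1 (dy/ds = alpha(s), y(t) = x), integrate backwards:
    y(0) = x - integral of alpha over [0, t], via the midpoint rule."""
    ds = t / n
    return x - sum(alpha((k + 0.5) * ds) * ds for k in range(n))

# The extreme controls alpha = +1 and alpha = -1 reach the endpoints
# y(0) = 0 and y(0) = 2; no admissible control (|alpha| <= 1) starts
# outside [0, 2].
print(start_point(lambda s: 1.0))    # close to 0.0
print(start_point(lambda s: -1.0))   # close to 2.0
```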
By introducing the control as a parameter of our ODE, we can ask the following question:
What choice of $\alpha(\cdot)$ will cause the resulting $y^{\alpha(\cdot)}(s)$ to be optimal in some way? We will
associate each control to a cost $C_{x,t}[\alpha(\cdot)]$:
$$C_{x,t}[\alpha(\cdot)] = g\big(y^{\alpha(\cdot)}(0)\big) + \int_0^t r\big(y^{\alpha(\cdot)}(s), \alpha(s)\big) \, ds$$
Then $u(x, t)$ answers the following question: If we start somewhere in $\mathbb{R}^n_y \times \{0\}_s$ and
want to move to the point $(x, t)$, what is the cheapest way to do so?
4.3. Dynamic programming. Let us suppose that in (4.2), the infimum is actually a
minimum, and that the control $\alpha$ attains the minimum cost. In that case, we can write the
total cost as
$$C_{x,t}[\alpha(\cdot)] = g\big(y^{\alpha(\cdot)}(0)\big) + \int_0^{t-h} r\big(y^{\alpha(\cdot)}(s), \alpha(s)\big) \, ds + \int_{t-h}^t r\big(y^{\alpha(\cdot)}(s), \alpha(s)\big) \, ds$$
The first two terms represent the cost of moving from $s = 0$ to $(y^{\alpha(\cdot)}(t-h), t-h)$. The
remaining term represents the cost of moving onwards to $(x, t)$. Employing the principle of
dynamic programming, $\alpha$ should be the control that attains the minimum cost when our
target is $(y^{\alpha(\cdot)}(t-h), t-h)$. That is, if $\alpha$ is the optimal control for time $t$, then it should be
the optimal control for all times earlier than $t$. Thus, we should have the relation
$$u(x, t) = u\big(y^{\alpha(\cdot)}(t-h), t-h\big) + \int_{t-h}^t r\big(y^{\alpha(\cdot)}(s), \alpha(s)\big) \, ds$$
The problem with this argument is that in (4.2), the infimum might not be attainable.
The precise statement is actually
For a proof, see [Eva10, Chapter 10].⁷ Once we understand how to interpret the statement
(4.3), the proof becomes very straightforward; it is essentially the argument we have
presented above, but with some technical modifications.
The left hand side is a difference quotient. Taking the limit as $h \to 0$ gives us
$$\nabla_x u(y(t), t) \cdot \dot{y}(t) + \partial_t u(y(t), t) = r(y(t), \alpha(t))$$
Recall that the relation above is for a particular function $\alpha(\cdot)$. However, because we have
taken the limit $h \to 0$, the relation only depends on the value of $\alpha$ at $t$. Let $a = \alpha(t)$. Once
again, because the infimum might not be actually attained, we have arrived at a slightly
modified relation
$$\partial_t u(x, t) + \max_{a \in A} \{f(x, a) \cdot \nabla_x u(x, t) - r(x, a)\} = 0 \tag{4.4}$$
7. Note, however, that our setup is slightly different from the setup in [Eva10]. We have chosen to present
the control theory problem backwards, so that our resulting PDE is an initial-value problem, instead of a
terminal-value problem (which is what [Eva10] obtains).
Note that the PDE has the form of the Hamilton-Jacobi equation, with Hamiltonian
$H(p, x) = \max_{a \in A} \{f(x, a) \cdot p - r(x, a)\}$. This equation in control theory is known as the
Hamilton-Jacobi-Bellman equation.
We did not, however, provide a rigorous argument that u does solve (4.4). In fact, the
function u defined by (4.2) might not even be differentiable anywhere. In the next section,
we describe properties that u does satisfy to argue why we should indeed refer to u as the
solution to (4.4).
5.1. The method of vanishing viscosity. M. Crandall and P. Lions first introduced
the concept of viscosity solutions in their 1983 paper [CL83]. We illustrate the method of
vanishing viscosity by applying it to the Hamilton-Jacobi equation. The method works as
follows: Instead of seeking a solution $u$ to
$$\partial_t u + H(\nabla_x u, x) = 0 \text{ on } \mathbb{R}^n_x \times (0, \infty)_t, \qquad u = g \text{ on } \mathbb{R}^n_x \times \{0\}_t \tag{5.1}$$
At a maximum, the Hessian matrix of $u_i - v$ is negative semi-definite. This gives us the second
order condition
8. $C^\infty(\mathbb{R}^n_x \times (0, \infty)_t)$ denotes the set of smooth $\mathbb{R}^n_x \times (0, \infty)_t \to \mathbb{R}$ functions. We call this the class of smooth
test functions because we will be testing our viscosity solutions against this class of functions.
The introduction of the viscosity term changes the PDE from being fully nonlinear to
being semilinear. Semilinear PDE are usually better behaved than fully nonlinear PDE. We
define these classes of PDE below.
Definition 5.2. A PDE of order m is semilinear if the derivatives of order m appear linearly
with known coefficients. (That is, the coefficients do not contain the unknown function or
any of its derivatives.) A PDE of order m is fully nonlinear if, when we view all lower order
derivatives as constants, the derivatives of order m still appear nonlinearly.
Example 5.3. The equation $\partial_t u + \frac{1}{2} (\partial_x u)^2 = \partial_{xx} u$ is a PDE of order $m = 2$. Because the
coefficient of $\partial_{xx} u$ is constant, the equation is semilinear. On the other hand, the equation
$\partial_t u + \frac{1}{2} (\partial_x u)^2 = 0$ is a fully nonlinear PDE of order $m = 1$, due to the nonlinear term
$\frac{1}{2} (\partial_x u)^2$.
As we see from the definition, $u$ can be a viscosity solution even if it is not differentiable;
the tradeoff is that we must compare $u$ to an entire class of test functions $C^\infty(\mathbb{R}^n_x \times (0, \infty)_t)$.
In subsection 5.1, we effectively proved the following property of viscosity solutions:
Theorem 5.5 (stability). Let $H_i$ converge uniformly on compact subsets to $H$ and suppose
$u_i : \mathbb{R}^n_x \times \mathbb{R}_t \to \mathbb{R}$ is a classical solution to
$$\partial_t u_i + H_i(\nabla_x u_i, x) = \epsilon_i \Delta_x u_i \text{ on } \mathbb{R}^n_x \times (0, \infty)_t, \qquad u_i = g \text{ on } \mathbb{R}^n_x \times \{0\}_t$$
Suppose there is some sequence $\{\epsilon_i\}$ that decreases to 0 such that $u_i$ converges uniformly
on compact subsets to $u$. Then $u$ is a viscosity solution to $\partial_t u + H(\nabla_x u, x) = 0$ with initial
data $g(x)$.
Proof. Suppose $v$ is a smooth test function and $u - v$ has a local extremum at $(x_0, t_0)$. Then
the first order conditions on $u - v$ are $\nabla_x (u - v)(x_0, t_0) = 0$ and $\partial_t (u - v)(x_0, t_0) = 0$. Since
$u$ satisfies the PDE (5.1) at $(x_0, t_0)$, it follows from the first order conditions that $v$ does as
well.
Viscosity solutions allow us to conclude our discussion on control theory and the Hamilton-
Jacobi-Bellman equation in section 4.
Theorem 5.7. As in section 4, let $g : \mathbb{R}^n_y \to \mathbb{R}$ be the initial cost and $r : \mathbb{R}^n_y \times A \to \mathbb{R}$ be
the running cost per unit time. Define $u(x, t)$ by (4.2).
Furthermore, assume $f(y, a)$, $r(y, a)$ and $g(y)$ are Lipschitz continuous in $y$ and uniformly
bounded.
Then $u$ is a viscosity solution to the Hamilton-Jacobi-Bellman equation
$$\partial_t u(x, t) + \max_{a \in A} \{f(x, a) \cdot \nabla_x u(x, t) - r(x, a)\} = 0$$
The result of Theorem 5.7 in itself is not necessarily a reason for us to consider $u$ to be a legitimate
solution; when dealing with different PDEs, many notions of weak solution fail to be satisfactory. We
would like one and only one solution, but sometimes weak solutions might not exist, or if
one does exist, it might not be unique. Viscosity solutions, on the other hand, do in fact satisfy
existence and uniqueness properties!
Theorem 5.9 (existence and uniqueness). Under certain regularity conditions on the Hamil-
tonian H, there exists a unique viscosity solution to the Hamilton-Jacobi equation (5.1).
Remark 5.10. The Hopf-Lax formula, which will be presented in subsection 5.3, gives a
solution to (5.1) for a narrow class of Hamiltonians. To prove the existence in more general
cases, we can use our argument in subsection 5.1, provided we can prove the existence of
solutions $u_i$ to (5.2); see [CL83, Section IV] for a complete discussion. For a proof of
uniqueness, see [Eva10, Chapter 10] or [CL83, Theorem V.2].
5.3. Hopf-Lax Formula for Hamiltonian Mechanics. We make the following assumptions about $H$:
• $H : \mathbb{R}^n_p \to \mathbb{R}$ is a function of $p$ only.
• $H$ is convex.
• $\lim_{|p| \to \infty} H(p)/|p| = +\infty$.
Under these assumptions, we can define the Lagrangian by $L(v) = \sup_{p \in \mathbb{R}^n} \{p \cdot v - H(p)\}$.
We can show that $L$ will also be convex and it will also satisfy $\lim_{|v| \to \infty} L(v)/|v| = +\infty$.
Using properties of the Legendre transform stated in subsection 2.2, the Hamilton-Jacobi
equation takes the form
$$\partial_t u(x, t) + \max_{v \in \mathbb{R}^n} \{v \cdot \nabla_x u(x, t) - L(v)\} = 0$$
This has the form of the Hamilton-Jacobi-Bellman equation! Here we use $v \in \mathbb{R}^n$ in place
of $a \in A$. Then, we have $f(x, v) = v$ and $r(x, v) = L(v)$. If $g(x)$ is our initial data, then
our optimization problem in this case is
$$u(x, t) = \min_{v(\cdot)} \left\{ \int_0^t L(v(s)) \, ds + g(y(0)) \right\} = \min_{y_0 \in \mathbb{R}^n} \left\{ g(y_0) + \min_{y(\cdot)} \int_0^t L(\dot{y}(s)) \, ds \right\}$$
where the inner minimum is taken over all paths $y(s)$ such that $y(0) = y_0$ and $y(t) = x$.
By the theory of Lagrangian mechanics, since our Lagrangian depends on $v$ and not on $x$,
the minimum of that integral is attained by a straight line path.⁹ It follows that
$$u(x, t) = \min_{y \in \mathbb{R}^n} \left\{ t L\left(\frac{x - y}{t}\right) + g(y) \right\}$$
where we have dropped the subscript 0 from y0 . This formula is the Hopf-Lax formula.
What this tells us is that the generating function we seek in subsection 2.7 is given by a
minimization problem. While the formula can be derived through other means, our path
makes the connection to optimization problems clear.
Example 5.11. Take $H(p) = \frac{1}{2} p^2$ and $g(x) = -|x|$. Then $L(v) = \frac{1}{2} v^2$ and the Hopf-Lax
formula gives
$$u(x, t) = \min_{y \in \mathbb{R}} \left\{ \frac{(x - y)^2}{2t} - |y| \right\} = -|x| - \frac{1}{2} t$$
9The integral itself is called the action.
which agrees with our first solution (3.3) in subsection 3.3. By consistency and uniqueness
of viscosity solutions, we know there are no differentiable solutions to the PDE (a fact we
had already established due to the crossing of characteristics).
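The Hopf-Lax formula lends itself to direct numerical evaluation. The sketch below (an illustration, not part of the paper) approximates the minimum over $y$ by a minimum over a finite grid and checks it against Example 5.11, where $u(x, t) = -|x| - \frac{1}{2}t$. The grid bounds and resolution are arbitrary choices.

```python
import numpy as np

def hopf_lax(g, L, x, t, y_grid):
    """Hopf-Lax formula u(x, t) = min_y { t * L((x - y)/t) + g(y) },
    with the min approximated over a finite grid of y values."""
    return np.min(t * L((x - y_grid) / t) + g(y_grid))

ys = np.linspace(-20, 20, 400_001)   # illustrative grid
L = lambda v: 0.5 * v**2             # Lagrangian for H(p) = p^2/2
g = lambda y: -np.abs(y)             # initial data of Example 5.11
# Example 5.11 predicts u(x, t) = -|x| - t/2
print(hopf_lax(g, L, x=1.0, t=2.0, y_grid=ys))  # close to -2.0
```

As with the grid Legendre transform sketched earlier, the grid must be wide enough to contain the minimizing $y$ (here $y = 3$ for $x = 1$, $t = 2$).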
Appendix A. Notation
A.1. Functions.
A.2. Differentiation.
The derivative of a function is always taken with respect to one of the typical
variables. We will use $\partial$ to denote the partial derivative with respect to one scalar
variable. We will use $\nabla$ and $\Delta$ to denote the gradient and Laplacian, respectively.
For example, let $u : \mathbb{R}^n_x \times \mathbb{R}_t \to \mathbb{R}$. Then:
(1) $\partial_t u$ is a $\mathbb{R}^n_x \times \mathbb{R}_t \to \mathbb{R}$ function given by
$$\partial_t u(x, t) = \frac{\partial u}{\partial t}(x, t)$$
For section 2 (and only section 2), we assume the reader is familiar with elementary
techniques and properties of differential forms. Nonetheless, we include this very quick review
of differential forms to highlight properties that will be important in our main discussion.
Let $V$ be an $n$-dimensional vector space over $\mathbb{R}$.
Remark B.2. Let $x_1, \ldots, x_n$ be a basis for $V$. For $i = 1, \ldots, n$, let $dx_i : V \to \mathbb{R}$ be the 1-form
defined by
$$dx_i : c_1 x_1 + \cdots + c_n x_n \mapsto c_i$$
That is, $dx_i$ denotes projection onto coordinate $x_i$. Observe that $dx_1, \ldots, dx_n$ is a basis
for the space of 1-forms on $V$.¹⁰
Definition B.3. If $\omega_1$ and $\omega_2$ are 1-forms on $V$, then the wedge product $\omega_1 \wedge \omega_2$ is a 2-form
on $V$, given by
$$\omega_1 \wedge \omega_2 : (\xi, \eta) \mapsto \begin{vmatrix} \omega_1(\xi) & \omega_1(\eta) \\ \omega_2(\xi) & \omega_2(\eta) \end{vmatrix}$$
Remark B.4. It follows from the determinant formula that $\omega_1 \wedge \omega_2 = -\omega_2 \wedge \omega_1$.
Remark B.5. The set $\{dx_i \wedge dx_j : 1 \le i < j \le n\}$ is a basis for the space of 2-forms on $V$.
Remark B.6. Also from the determinant formula, $(dx_1 \wedge dx_2)(\xi, \eta)$ is the signed area of the
projection of the parallelogram spanned by $\xi$ and $\eta$ onto the space spanned by $x_1$ and $x_2$.
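The determinant in Definition B.3 is easy to compute in coordinates. This small sketch (illustrative, not from the appendix) evaluates $(dx_1 \wedge dx_2)(\xi, \eta)$ for two vectors in $\mathbb{R}^3$, exhibiting both the signed-area interpretation of Remark B.6 and the antisymmetry of Remark B.4.

```python
def wedge(xi, eta, i=0, j=1):
    """(dx_{i+1} ∧ dx_{j+1})(xi, eta), 0-indexed: the 2x2 determinant of
    the selected coordinates, i.e. the signed area of the parallelogram
    spanned by xi and eta, projected onto the corresponding plane."""
    return xi[i] * eta[j] - xi[j] * eta[i]

xi, eta = (2.0, 0.0, 5.0), (0.0, 3.0, -1.0)
print(wedge(xi, eta))   # 6.0: the projected parallelogram is a 2-by-3 rectangle
print(wedge(eta, xi))   # -6.0: swapping the arguments flips the sign (Remark B.4)
```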
10. Note that the projection $dx_i$ depends not only on the vector $x_i$, but on the entire basis $x_1, \ldots, x_n$.
Remark B.7. Consider the 2-form $\omega^2 = \sum_{i<j} a_{ij} \, dx_i \wedge dx_j$. We can associate to $\omega^2$ the $n \times n$
matrix $A$ determined by the following properties:
• If $i < j$, then $(A)_{ij} = a_{ij}$. That is, the upper-diagonal entries of $A$ are given by the
values $a_{ij}$.
• $A$ is skew-symmetric. (Note that this property, when combined with the previous
one, completely determines $A$.)
Remark B.8. It is possible to extend the definition of the wedge product: If $\omega_1$ is a $k_1$-form
and $\omega_2$ is a $k_2$-form, then $\omega_1 \wedge \omega_2$ will be a $(k_1 + k_2)$-form. (We will not require this
extension.)
Definition B.9. A differential 1-form on $V$ has the form $\omega = \sum_{i=1}^n f_i \, dx_i$, where each $f_i$ is
a $V \to \mathbb{R}$ map. If we take a point $v \in V$ and evaluate each $f_i$ at $v$, then we get the 1-form
$\sum_i f_i(v) \, dx_i$.
$$d\omega^1 = \sum_{i=1}^n \sum_{j=1}^n \frac{\partial f_j}{\partial x_i} \, dx_i \wedge dx_j = \sum_{i<j} \left( \frac{\partial f_j}{\partial x_i} - \frac{\partial f_i}{\partial x_j} \right) dx_i \wedge dx_j$$
Remark B.12. If $n = 3$ and $\omega^1 = f_1(x) \, dx_1 + f_2(x) \, dx_2 + f_3(x) \, dx_3$, then $d\omega^1 = g_1(x) \, dx_2 \wedge dx_3 + g_2(x) \, dx_3 \wedge dx_1 + g_3(x) \, dx_1 \wedge dx_2$, where $(g_1, g_2, g_3) = \nabla \times (f_1, f_2, f_3)$. Since $d\omega^1$ is a
2-form, we can associate to it a matrix, in particular:
$$\begin{pmatrix} 0 & g_3 & -g_2 \\ -g_3 & 0 & g_1 \\ g_2 & -g_1 & 0 \end{pmatrix}$$
Definition B.13. Suppose the matrix corresponding to $d\omega^1$ has a unique eigenvector $\xi(x)$
of eigenvalue 0. Then a bi-characteristic of $\omega^1$ is a path through $\mathbb{R}^n$ such that the tangent
vector to the path at any point $x$ is parallel to $\xi(x)$.
Remark B.14. The bi-characteristics of a differential 1-form are independent of the coordinate
system chosen.
Acknowledgments
This paper is the author's junior paper for his fall 2012 term at Princeton University. Many
thanks are owed to Professor Peter Constantin for his guidance throughout the semester and
his valuable suggestions during the writing process. Professor Daniel Tataru introduced
the author to the field of PDEs; for that, the author is very grateful. The author would also
like to thank his peers Minh-Tam Trinh, Laurent Cote, Evangelie Zachos, Feng Zhu, and
Rohan Ghanta for their help.
References
[Arn89] V.I. Arnold, Mathematical methods of classical mechanics, Graduate Texts in Mathematics, Springer, 1989.
[CEL84] M.G. Crandall, L.C. Evans, and P.L. Lions, Some properties of viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc. 282 (1984), no. 2.
[CIL92] M.G. Crandall, H. Ishii, and P.L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. 27 (1992), no. 1, 1-67.
[CL83] M.G. Crandall and P.L. Lions, Viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc. 277 (1983), no. 1, 1-42.
[Eva10] L.C. Evans, Partial differential equations, Graduate Studies in Mathematics, Amer. Math. Soc., 2010.
[LL76] L.D. Landau and E.M. Lifshitz, Mechanics, Course of Theoretical Physics, Pergamon Press, 1976.
Princeton University