Evans PDE
Lecture 4.1: Separation of Variables
Disclaimer: These notes are based on the textbook Partial Differential Equations by Lawrence C. Evans.
Do not distribute without permission.
When using the method called "separation of variables," the goal is to construct a solution u of a given partial differential equation as a combination of functions of fewer variables. In other words, we guess that u can be written as a sum or product of undetermined constituent functions, plug this guess into the PDE, and then choose those simpler functions so that u really is a solution.
Example 4.1 Consider the initial/boundary-value problem for the heat equation
(1)   u_t − ∆u = 0 in U × (0, ∞),   u = 0 on ∂U × [0, ∞),   u = g on U × {t = 0},
where U ⊂ R^n is open and bounded and g is given initial data. We conjecture there exists a solution of the form
(2)   u(x, t) = v(t)w(x)   (x ∈ U, t ≥ 0).
Proof: That is, we look for a solution in which the variables x = (x1, . . . , xn) ∈ U are "separated" from the variable t ∈ [0, ∞). Will this work? To find out, we compute
u_t(x, t) = v'(t)w(x),   ∆u(x, t) = v(t)∆w(x).
Hence u_t − ∆u = 0 if and only if
(3)   v'(t)/v(t) = ∆w(x)/w(x)
for all x ∈ U and t > 0 such that w(x), v(t) ≠ 0. Now observe that the left-hand side of (3) depends only on t and the right-hand side depends only on x. This is possible only if each side is constant, say
v'(t)/v(t) = µ = ∆w(x)/w(x)   (t ≥ 0, x ∈ U).
Then
(4)   v' = µv
(5)   ∆w = µw.
We must solve these equations for the unknowns w, v, and µ. Notice first that if µ is known, the solution of (4) is v = de^{µt} for an arbitrary constant d. Consequently we need only investigate equation (5). We say that λ is an eigenvalue of the operator −∆ on U (subject to zero boundary conditions) provided there exists a function w, not identically zero, solving
(6)   −∆w = λw in U,   w = 0 on ∂U.
If λ is an eigenvalue and w a corresponding eigenfunction, then setting µ = −λ above gives v = de^{−λt}, and hence
u = de^{−λt}w
solves
(7)   u_t − ∆u = 0 in U × (0, ∞),   u = 0 on ∂U × [0, ∞),
with the initial condition u(·, 0) = dw. Thus u solves problem (1), provided g = dw. More generally, if λ_1, . . . , λ_m are eigenvalues, w_1, . . . , w_m are corresponding eigenfunctions, and d_1, . . . , d_m are constants, then
(8)   u = Σ_{k=1}^{m} d_k e^{−λ_k t} w_k
solves (7), with the initial condition u(·, 0) = Σ_{k=1}^{m} d_k w_k. Hence if we can find m, eigenvalues, eigenfunctions, and constants such that Σ_{k=1}^{m} d_k w_k = g, we are done. We can hope to generalize further by trying an infinite sequence λ_1, λ_2, . . . of eigenvalues with corresponding eigenfunctions w_1, w_2, . . . and constants d_1, d_2, . . . , so that
(9)   Σ_{k=1}^{∞} d_k w_k = g   in U.
Then presumably
(10)   u = Σ_{k=1}^{∞} d_k e^{−λ_k t} w_k
will be the solution to (1). This is an attractive representation, but it depends upon our being able to find eigenvalues, eigenfunctions, and constants satisfying (9), and upon verifying that the series in (10) converges in some appropriate sense, as discussed in Chapters 6 and 7. Note that (6) is determined by the separation of variables, while (8) and (10) depend on the linearity of the heat equation.
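The expansion (10) is easy to make concrete in one space dimension. As an illustration (this concrete setting is my own, not from Evans), take U = (0, π): the Dirichlet eigenfunctions of −∆ are w_k(x) = sin(kx) with eigenvalues λ_k = k^2, and the constants d_k in (9) are the Fourier sine coefficients of g. A minimal numerical sketch of the truncated series:

```python
import numpy as np

def heat_series_solution(g, m=50, num_x=400):
    """Truncated series (10) on U = (0, pi): eigenfunctions sin(kx), eigenvalues k^2."""
    x = np.linspace(0.0, np.pi, num_x)
    dx = x[1] - x[0]
    k = np.arange(1, m + 1)
    # d_k = (2/pi) * integral_0^pi g(x) sin(kx) dx, approximated by a Riemann sum
    d = 2.0 / np.pi * np.sin(np.outer(k, x)) @ g(x) * dx

    def u(t):
        # u(x, t) = sum_{k=1}^{m} d_k exp(-k^2 t) sin(kx)
        return (d * np.exp(-k**2 * t)) @ np.sin(np.outer(k, x))

    return x, u

# Example initial data g(x) = x(pi - x); the solution decays toward 0 as t grows.
x, u = heat_series_solution(lambda x: x * (np.pi - x))
print(u(0.0).max(), u(0.5).max(), u(2.0).max())
```

Each mode decays like e^{−k^2 t}, so the high-frequency components of g die out first; this is the quantitative content of the representation (10).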
Example 4.2 Let us apply separation of variables to find a solution of the porous medium equation
(11)   u_t − ∆(u^γ) = 0   in R^n × (0, ∞),
where u ≥ 0 and γ > 1 is a constant. This is a nonlinear diffusion equation, in which the rate of diffusion of some density u depends on u itself. It describes flow in porous media, thin-film lubrication, and a variety of other phenomena.
Proof: As in the last example, we seek a solution of the form
(12)   u(x, t) = v(t)w(x)   (x ∈ R^n, t ≥ 0).
Inserting (12) into (11), we find as before that
(13)   v'(t)/v^γ(t) = ∆(w^γ(x))/w(x) = µ
for some constant µ and all x ∈ R^n, t ≥ 0 such that w(x), v(t) ≠ 0. We solve the ODE for v and find
v = ((1 − γ)µt + λ)^{1/(1−γ)}
for some constant λ which we will take to be positive. To discover w we must solve the PDE
(14)   ∆(w^γ) = µw.
Let us guess that w has the form
(15)   w = |x|^α
for some constant α. A direct computation gives ∆(w^γ) = αγ(αγ + n − 2)|x|^{αγ−2}, so (14) holds provided
(16)   α = 2/(γ − 1),
(17)   µ = αγ(αγ + n − 2) > 0.
Consequently
u = ((1 − γ)µt + λ)^{1/(1−γ)} |x|^α
solves the porous medium equation (11) with the parameters α, µ defined by (16) and (17).
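As a quick sanity check (mine, not part of Evans' text), one can verify symbolically that this profile solves (11) for a concrete choice of parameters, say n = 3 and γ = 2, using the radial form of the Laplacian, ∆f = f_rr + (n − 1)f_r/r:

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)
n, gam, lam = 3, 2, sp.Integer(1)              # concrete choices for the check

alpha = sp.Integer(2) / (gam - 1)              # (16): alpha = 2/(gamma - 1)
mu = alpha * gam * (alpha * gam + n - 2)       # (17): mu = alpha*gamma*(alpha*gamma + n - 2)

v = ((1 - gam) * mu * t + lam) ** sp.Rational(1, 1 - gam)
u = v * r ** alpha                             # u = v(t) |x|^alpha, with r = |x|

# Radial Laplacian in R^n applied to u^gamma
lap = sp.diff(u ** gam, r, 2) + (n - 1) / r * sp.diff(u ** gam, r)

print(sp.simplify(sp.diff(u, t) - lap))        # prints 0: u solves u_t = Laplacian(u^gamma)
```

The same check works for other γ > 1 and dimensions n, with α and µ recomputed from (16) and (17).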
Remark 4.3 Since γ > 1, this solution blows up for x ≠ 0 as t → t*, where t* := λ/((γ − 1)µ). In physical terms, a huge amount of mass "diffuses in from infinity" in finite time. Section 4.2.2 gives another, better-behaved solution of the porous medium equation, and Section 9.4.1 has more on blow-up phenomena for nonlinear diffusion equations.
In the previous example, separation of variables worked because of the homogeneity of the nonlinearity, which is compatible with functions u having the multiplicative form (12). In other circumstances it is better to look for a solution in which the variables are separated additively. We will look at an example of this next.
Example 4.4 Consider the Hamilton–Jacobi equation
(18)   u_t + H(Du) = 0 in R^n × (0, ∞)
and look for a solution of the form
u(x, t) = w(x) + v(t)   (x ∈ R^n, t ≥ 0).
Then 0 = u_t + H(Du) = v'(t) + H(Dw(x)) if and only if
H(Dw(x)) = µ = −v'(t)   (x ∈ R^n, t ≥ 0)
for some constant µ. Consequently, if H(Dw) = µ for some µ ∈ R, then u(x, t) = w(x) − µt + b solves u_t + H(Du) = 0 for any constant b. In particular, if we choose w(x) = a · x for some a ∈ R^n and set µ = H(a), we recover the solution u = a · x − H(a)t + b found in Section 3.1.
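A short symbolic check of this linear solution, for an illustrative Hamiltonian of my own choosing (H(p) = |p|^2 in R^2; Evans leaves H general):

```python
import sympy as sp

x1, x2, t, a1, a2, b = sp.symbols('x1 x2 t a1 a2 b')

H = lambda p1, p2: p1**2 + p2**2                # illustrative smooth Hamiltonian
u = a1 * x1 + a2 * x2 - H(a1, a2) * t + b       # u = a.x - H(a) t + b

residual = sp.diff(u, t) + H(sp.diff(u, x1), sp.diff(u, x2))
print(sp.simplify(residual))                    # prints 0: u_t + H(Du) = 0
```

Any w with H(Dw) constant works the same way; the affine choice w(x) = a · x is simply the easiest one.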
Lecture 4.1.2: Application: Turing Instability
Disclaimer: These notes are based on the textbook Partial Differential Equations by Lawrence C. Evans.
Do not distribute without permission.
In the first example we discussed separation of variables and eigenfunction expansions as powerful tools in
both pure and applied mathematics. This section discusses an interesting application.
Example 4.5 Assume we are given a smooth vector field f = (f^1, f^2) on R^2 for which 0 is an equilibrium: f(0) = 0. We are interested in comparing the stability of solutions x = (x^1, x^2) of the system of ODE
(19)   ẋ = f(x)   (t ≥ 0)
with that of solutions u = (u^1, u^2) of a corresponding reaction-diffusion system of PDE
(20)   u_t − A∆u = f(u) in U × (0, ∞),   u = 0 on ∂U × (0, ∞)
in some smooth, bounded region U ⊂ R^2. The diagonal matrix A = diag(a_1, a_2) introduces the diffusion constants a_1, a_2 ≥ 0.
The linearization of (19) around the equilibrium solution x ≡ 0 is the linear system of ODE
(21)   ẏ = Df(0)y   (t ≥ 0).
Similarly, the linearization of the PDE (20) around the equilibrium solution u ≡ 0 is the linear system
(22)   v_t − A∆v = Df(0)v in U × (0, ∞),   v = 0 on ∂U × (0, ∞)
for v = (v^1, v^2). We solve (22) by separation of variables and a subsequent eigenfunction expansion, as described earlier. Thus we write
(23)   v(x, t) = Σ_{j=1}^{∞} s_j(t) w_j(x),
where the w_j are eigenfunctions satisfying
−∆w_j = λ_j w_j in U,   w_j = 0 on ∂U.
In Section 6.5 we will learn more about such eigenvalues and eigenfunctions; in particular, we will see that λ_j > 0 (j = 1, . . .) and that we can take {w_j}_{j=1}^{∞} to be orthonormal in L^2:
∫_U w_i w_j dx = δ_ij   (i, j = 1, . . .).
Plugging (23) into (22), we see that for j = 1, . . .
(24)   s_j' = A_j s_j
for the matrix
(25)   A_j := Df(0) − λ_j A.
Then the solution v ≡ 0 is stable if and only if each function s_j decays to 0 as t → ∞.
This occurs provided the eigenvalues of the matrices A_j have negative real parts for j = 1, . . .. Now suppose that 0 is an asymptotically stable equilibrium for the system of ODE (19). Does it follow that 0 is an asymptotically stable equilibrium for the system of PDE (20)? The answer is in fact no: the diffusion terms in the PDE (20) can transform a stable point for (19) into an unstable point for (20). This effect is called a Turing instability. We will investigate this phenomenon by introducing some explicit conditions on Df(0) that force 0 to be stable for the ODE (19). Let us define
Df(0) := (f^1_{z1}(0), f^1_{z2}(0); f^2_{z1}(0), f^2_{z2}(0)) =: (α, β; γ, δ),
where the semicolon separates the rows of the 2 × 2 matrix.
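To see the mechanism concretely, here is a small numerical sketch; the particular matrices are my own choices for illustration, not from Evans. We take a Df(0) whose eigenvalues both have negative real parts, so 0 is asymptotically stable for the ODE (19), and then watch the eigenvalues of A_j = Df(0) − λ_j A as the eigenvalue λ_j of −∆ varies:

```python
import numpy as np

Df0 = np.array([[1.0, -2.0],
                [3.0, -4.0]])        # trace < 0 and det > 0, so 0 is stable for (19)
A = np.diag([1.0, 30.0])             # strongly unequal diffusion constants a_1, a_2

print(np.linalg.eigvals(Df0).real)   # both negative: stable without diffusion

# Growth rate of each mode in (23): s_j' = A_j s_j with A_j = Df(0) - lambda_j A
for lam in [0.05, 0.1, 0.3, 0.8, 2.0]:   # stand-ins for eigenvalues lambda_j of -Laplacian
    A_j = Df0 - lam * A
    rate = np.linalg.eigvals(A_j).real.max()
    print(f"lambda_j = {lam:4.2f}: max Re eigenvalue of A_j = {rate:+.3f}")
```

For intermediate values of λ_j the largest real part becomes positive, so the corresponding mode in (23) grows even though the diffusion-free problem is stable: this is the Turing instability, and it requires the two diffusion constants to be sufficiently different.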