Wronskian 19
1. The Wronskian
Consider a set of n continuous functions yi (x) [i = 1, 2, 3, . . . , n], each of which is
differentiable at least n times. Then if there exists a set of constants λi that are not all
zero such that

λ1 y1(x) + λ2 y2(x) + · · · + λn yn(x) = 0 ,    (1)
then we say that the set of functions {yi (x)} are linearly dependent. If the only solution
to eq. (1) is λi = 0 for all i, then the set of functions {yi (x)} are linearly independent.
The Wronskian matrix is defined as:

Φ[yi(x)] = \begin{pmatrix}
y1 & y2 & \cdots & yn \\
y1′ & y2′ & \cdots & yn′ \\
y1′′ & y2′′ & \cdots & yn′′ \\
\vdots & \vdots & \ddots & \vdots \\
y1^{(n−1)} & y2^{(n−1)} & \cdots & yn^{(n−1)}
\end{pmatrix} ,    (2)

where

yi′ ≡ dyi/dx ,   yi′′ ≡ d²yi/dx² ,   · · · ,   yi^{(n−1)} ≡ d^{n−1}yi/dx^{n−1} .
The Wronskian is defined to be the determinant of the Wronskian matrix,

W[yi(x)] ≡ det Φ[yi(x)] .    (3)
In light of eq. (8.5) on p. 133 of Boas, if {yi (x)} is a linearly dependent set of functions
then the Wronskian must vanish. However, the converse is not necessarily true, as one
can find cases in which the Wronskian vanishes without the functions being linearly
dependent. (For further details, see problem 3.8–16 on p. 136 of Boas.)
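As an illustrative check (this example and the use of SymPy are additions, not part of the notes), the set {sin²x, cos²x, 1} is linearly dependent, since sin²x + cos²x − 1 = 0, so by the criterion above its Wronskian must vanish identically:

```python
# Check that a linearly dependent set has a vanishing Wronskian.
# The set {sin^2 x, cos^2 x, 1} satisfies sin^2 x + cos^2 x - 1 = 0.
from sympy import symbols, sin, cos, simplify, wronskian

x = symbols('x')
W = wronskian([sin(x)**2, cos(x)**2, 1], x)
print(simplify(W))  # 0
```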
Nevertheless, if the yi (x) are solutions to an nth order linear differential equation
for values of x that lie in some open interval (e.g., x0 < x < x1 ), then the converse does
hold. That is, if the yi (x) are solutions of a homogeneous nth order linear differential
equation and the Wronskian of the yi (x) vanishes, then {yi (x)} is a linearly dependent
set of functions. Moreover, if the Wronskian does not vanish for some value of x, then
it does not vanish for any value of x, in which case an arbitrary linear combination
of the yi (x) constitutes the most general solution to the nth order linear differential
equation. A proof of this statement is given in Appendix A.
2. Applications of the Wronskian in the treatment of a second order linear
differential equation
To simplify the discussion, we shall focus on the role of the Wronskian in the
treatment of a second order linear differential equation,1
y ′′ + a(x)y ′ + b(x)y = 0 . (4)
Suppose that y1 (x) and y2 (x) are linearly independent solutions of eq. (4). Then
the Wronskian is non-vanishing,
W = det \begin{pmatrix} y1 & y2 \\ y1′ & y2′ \end{pmatrix} = y1 y2′ − y1′ y2 ≠ 0 .    (5)
Taking the derivative of the above equation,
dW/dx = (d/dx)(y1 y2′ − y1′ y2) = y1 y2′′ − y1′′ y2 ,
since the terms proportional to y1′ y2′ exactly cancel. Using the fact that y1 and y2 are
solutions to eq. (4), we have
y1′′ + a(x)y1′ + b(x)y1 = 0 , (6)
y2′′ + a(x)y2′ + b(x)y2 = 0 . (7)
Next, we multiply eq. (7) by y1 and multiply eq. (6) by y2 , and subtract the resulting
equations. The end result is:
y1 y2′′ − y1′′ y2 + a(x) [y1 y2′ − y1′ y2] = 0 ,

or equivalently,

dW/dx + a(x) W = 0 .    (8)
This is a separable differential equation for the Wronskian W . It then follows that,
dW/W = −a(x) dx .
Integrating both sides of the above equation yields,2
ln |W(x)| = −∫ a(x) dx + ln C ,

where ln C is the constant of integration. Exponentiating yields Abel's formula,

W(x) = C exp[−∫ a(x) dx] .    (9)
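As a concrete check of Abel's formula [cf. eq. (9)], consider the assumed example y′′ + (1/x) y′ − (1/x²) y = 0, whose solutions are y1 = x and y2 = 1/x (the example and the SymPy check are additions to the notes):

```python
# Verify Abel's formula W(x) = C exp(-∫ a(x) dx) against the Wronskian
# computed directly from eq. (5), for a(x) = 1/x with y1 = x, y2 = 1/x.
from sympy import symbols, exp, integrate, simplify

x = symbols('x', positive=True)
y1, y2 = x, 1/x
a = 1/x                                    # coefficient a(x) in eq. (4)
W_direct = y1*y2.diff(x) - y1.diff(x)*y2   # eq. (5): equals -2/x
W_abel = -2*exp(-integrate(a, x))          # C exp(-∫ a dx) with C = -2
print(simplify(W_direct - W_abel))         # 0
```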
The Wronskian also appears in the following application. Suppose that one of the
two solutions of eq. (4), denoted by y1 (x) is known. We wish to determine a second
linearly independent solution of eq. (4), which we denote by y2 (x). The following
equation is an algebraic identity,
(d/dx)(y2/y1) = (y1 y2′ − y2 y1′)/y1² = W/y1² ,
after using the definition of the Wronskian W given in eq. (5). Integrating with respect
to x yields
y2/y1 = ∫ W(x)/[y1(x)]² dx .
Hence, it follows that3
y2(x) = y1(x) ∫ W(x)/[y1(x)]² dx .    (10)
Note that an indefinite integral always includes an arbitrary additive constant of inte-
gration. Thus, we could have written:
y2(x) = y1(x) [ ∫ W(x)/[y1(x)]² dx + C′ ] ,
where C′ is an arbitrary constant. Of course, since y1(x) is a solution to eq. (4), if
y2(x) is a solution then so is y2(x) + C′y1(x) for any number C′. Thus, we are free to
choose any convenient value of C ′ in defining the second linearly independent solution
of eq. (4). The choice of C ′ = 0 is the most common, in which case the second linearly
independent solution is given by eq. (10).
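Eq. (10) also works for non-constant coefficients. As an assumed example (not from the notes), y′′ − (2/x) y′ + (2/x²) y = 0 has the obvious solution y1 = x; Abel's formula with a(x) = −2/x gives W(x) = x² (dropping the overall constant), and eq. (10) then produces the second solution:

```python
# Apply eq. (10): y2 = y1 ∫ W/y1^2 dx, with W from Abel's formula.
from sympy import symbols, exp, integrate, simplify

x = symbols('x', positive=True)
y1 = x
W = exp(integrate(2/x, x))            # x**2, up to normalization
y2 = y1*integrate(W/y1**2, x)         # eq. (10)
print(simplify(y2))                   # x**2
res = simplify(y2.diff(x, 2) - (2/x)*y2.diff(x) + (2/x**2)*y2)
print(res)                            # 0: y2 solves the equation
```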
Here is a simple application of eq. (10). Consider the differential equation,
y ′′ − 2ry ′ + r 2 y = 0 . (11)
The auxiliary equation has a double root given by r. This means that y1 (x) = erx is
one solution of eq. (11). But, what is the second linearly independent solution? To use
eq. (10), we need the Wronskian, which can be obtained from Abel’s formula [eq. (9)]
by identifying a(x) = −2r. We will omit the overall factor of C since this factor simply
contributes to the overall normalization of the solution that we are seeking. Hence,
W(x) = exp[∫ 2r dx] = e^{2rx} .    (12)
Employing eq. (10), and noting that W (x)/[y1 (x)]2 = 1, we end up with,
y2(x) = e^{rx} ∫ dx = x e^{rx} .
We conclude that the most general solution to eq. (11) is given by an arbitrary linear
combination of y1 (x) and y2 (x). That is,
y(x) = (A + Bx)erx ,
where A and B are arbitrary constants.
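This conclusion is easy to verify symbolically (the SymPy check is an addition to the notes): both e^{rx} and x e^{rx} solve eq. (11) for arbitrary r:

```python
# Check that e^{rx} and x e^{rx} both satisfy y'' - 2 r y' + r^2 y = 0.
from sympy import symbols, exp, simplify

x, r = symbols('x r')
residuals = [simplify(y.diff(x, 2) - 2*r*y.diff(x) + r**2*y)
             for y in (exp(r*x), x*exp(r*x))]
print(residuals)  # [0, 0]
```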
3 A second derivation of eq. (10) is given in Appendix C. This latter derivation is useful as it can be easily generalized to the case of an nth order linear differential equation.
The Wronskian also appears in the expression for the particular solution of an
inhomogeneous linear differential equation. For example, consider
y ′′ + a(x)y ′ + b(x)y = f (x) , (13)
and assume that the two linearly independent solutions to the homogeneous equation
[eq. (4)], denoted by y1 (x) and y2 (x), are known. The most general solution to the
homogeneous equation is given by
yh (x) = c1 y1 (x) + c2 y2 (x) ,
where c1 and c2 are arbitrary constants. Then the general solution to eq. (13) is given
by
y(x) = yp (x) + yh (x) ,
where yp (x), called the particular solution, is determined by the following formula,
yp(x) = −y1(x) ∫ y2(x)f(x)/W(x) dx + y2(x) ∫ y1(x)f(x)/W(x) dx .    (14)
One can derive eq. (14) by employing the technique of variation of parameters.4
Namely, one writes
yp (x) = v1 (x)y1 (x) + v2 (x)y2 (x) , (15)
subject to the following condition (which is chosen entirely for convenience),
v1′ y1 + v2′ y2 = 0 . (16)
With this choice, it follows that
yp′ = v1 y1′ + v2 y2′ , (17)
yp′′ = v1′ y1′ + v2′ y2′ + v1 y1′′ + v2 y2′′ . (18)
Plugging eqs. (15), (17) and (18) into eq. (13), and using the fact that y1 and y2 satisfy
the homogeneous equation [eq. (4)] one obtains,
v1′ y1′ + v2′ y2′ = f (x) . (19)
We now have two equations, eqs. (16) and (19), which constitute two algebraic equa-
tions for v1′ and v2′ , which we can write in matrix form,
\begin{pmatrix} y1 & y2 \\ y1′ & y2′ \end{pmatrix} \begin{pmatrix} v1′ \\ v2′ \end{pmatrix} = \begin{pmatrix} 0 \\ f(x) \end{pmatrix} .
Note the appearance of the Wronskian matrix above. We can solve this matrix equa-
tion using Cramer’s rule. Since W (x) is the determinant of the Wronskian matrix
[cf. eq. (5)], it immediately follows that,
v1′ = −y2(x)f(x)/W(x) ,    v2′ = y1(x)f(x)/W(x) .
W (x) W (x)
We now integrate to get v1 and v2 and plug back into eq. (15) to obtain eq. (14). The
derivation is now complete.
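Eq. (14) can be exercised on a simple assumed example (not from the notes): for y′′ + y = x, take y1 = cos x and y2 = sin x, so that W = 1 and the formula should reproduce the familiar particular solution yp = x:

```python
# Apply eq. (14): yp = -y1 ∫ y2 f / W dx + y2 ∫ y1 f / W dx.
from sympy import symbols, sin, cos, integrate, simplify

x = symbols('x')
y1, y2, f, W = cos(x), sin(x), x, 1
yp = -y1*integrate(y2*f/W, x) + y2*integrate(y1*f/W, x)
yp = simplify(yp)
print(yp)                                # x
print(simplify(yp.diff(x, 2) + yp - f))  # residual is 0
```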
4 Boas employs an alternate method to obtain eq. (14) that makes use of Green functions discussed in Section 12 of Chapter 8 [cf. eq. (12.21) on p. 464]; this technique will not be covered in this course.
3. The particular solution of an inhomogeneous second order linear differ-
ential equation with constant coefficients
As an example, consider the inhomogeneous second order linear differential equation
with constant coefficients,
ay ′′ + by ′ + cy = f (x) . (20)
where a, b, and c are real constants (a ≠ 0) and f(x) is a given real function. In order
to find the solution to eq. (20), one first obtains the solution of the corresponding
homogeneous equation,
ay′′ + by′ + cy = 0 .    (21)
Following Section 5 of Chapter 8 on pp. 408-414 of Boas, one first finds the roots of
the corresponding auxiliary equation, ar 2 + br + c = 0. Denoting the two roots of the
auxiliary equation by r1 and r2 , then one can immediately write down the two linearly
independent solutions to eq. (21),
yh(x) = \begin{cases}
A e^{r1 x} + B e^{r2 x} , & for real roots, r1 ≠ r2 , \\
(A + Bx) e^{rx} , & for degenerate (real) roots, r ≡ r1 = r2 , \\
e^{αx} [A sin(βx) + B cos(βx)] , & for complex roots, r1 ≡ α + iβ and r2 = (r1)^* ,
\end{cases}    (22)
where A and B are arbitrary constants.
In order to find the most general solution to eq. (20), one must discover a particular
solution to eq. (20), denoted by yp (x). Then, the most general solution to eq. (20) is
given by,
y(x) = yp (x) + yh (x) . (23)
In Section 6 of Chapter 8 on pp. 417-422 of Boas, a method is provided for finding
yp (x) in cases where the function f (x) in eq. (20) is of the form ecx Pn (x), where c is
some number and Pn is a polynomial of degree n. In the general case, one can employ
eq. (14) to obtain a formal solution for yp (x) no matter what function f (x) appears on
the right hand side of eq. (20).
To employ eq. (14), we must first compute the Wronskian of y1 (x) and y2 (x). First,
consider the case of nondegenerate real roots, where y1 (x) = er1 x and y2 (x) = er2 x .
Then eq. (5) yields,
W(x) = (r2 − r1) e^{(r1 + r2)x} .
Next we must divide eq. (20) by a in order to match the form of eq. (4), which means
that f (x) is replaced by f (x)/a. Then, eq. (14) yields,
yp(x) = (1/[a(r1 − r2)]) [ e^{r1 x} ∫ e^{−r1 x} f(x) dx − e^{r2 x} ∫ e^{−r2 x} f(x) dx ] .    (24)
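As a sanity check of eq. (24) on an assumed example (not from the notes), take y′′ − 3y′ + 2y = e^{3x}, with a = 1 and auxiliary roots r1 = 1, r2 = 2; the expected particular solution is e^{3x}/2:

```python
# Evaluate eq. (24) for y'' - 3y' + 2y = e^{3x} (r1 = 1, r2 = 2, a = 1).
from sympy import symbols, exp, integrate, simplify

x = symbols('x')
a, r1, r2 = 1, 1, 2
f = exp(3*x)
yp = (exp(r1*x)*integrate(exp(-r1*x)*f, x)
      - exp(r2*x)*integrate(exp(-r2*x)*f, x)) / (a*(r1 - r2))
yp = simplify(yp)
print(yp)  # exp(3*x)/2
print(simplify(yp.diff(x, 2) - 3*yp.diff(x) + 2*yp - f))  # 0
```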
Second, in the case of degenerate roots, y1 (x) = erx and y2 (x) = xerx . Using eq. (5),
the Wronskian is given by [cf. eq. (12)],
W(x) = e^{2rx} .
In this case, eq. (14) yields,
yp(x) = (1/a) [ x e^{rx} ∫ e^{−rx} f(x) dx − e^{rx} ∫ x e^{−rx} f(x) dx ] .    (25)
Finally, in the case of complex roots, y1(x) = e^{αx} sin(βx) and y2(x) = e^{αx} cos(βx).
Then, eq. (5) yields,
W(x) = −β e^{2αx} .
In this case, eq. (14) yields,
yp(x) = (e^{αx}/(aβ)) [ sin(βx) ∫ e^{−αx} cos(βx) f(x) dx − cos(βx) ∫ e^{−αx} sin(βx) f(x) dx ] .    (26)
In summary, the solution to eq. (20) is given by eq. (23), where yh (x) is given by
eq. (22) in the three cases of real nondegenerate, real degenerate and complex roots of
the auxiliary equation, and yp (x) is given in the three corresponding cases by eqs. (24),
(25) and (26), respectively.
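The degenerate-root formula, eq. (25), can likewise be tested on an assumed example (not from the notes): y′′ − 2y′ + y = e^x has the double root r = 1, a = 1, and the expected particular solution x²e^x/2:

```python
# Evaluate eq. (25) for y'' - 2y' + y = e^x (double root r = 1, a = 1).
from sympy import symbols, exp, integrate, simplify

x = symbols('x')
r, a = 1, 1
f = exp(x)
yp = (x*exp(r*x)*integrate(exp(-r*x)*f, x)
      - exp(r*x)*integrate(x*exp(-r*x)*f, x)) / a
yp = simplify(yp)
print(yp)                                                 # x**2*exp(x)/2
print(simplify(yp.diff(x, 2) - 2*yp.diff(x) + yp - f))    # 0
```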
where the matrix A(x) is given by,
A(x) = \begin{pmatrix}
0 & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 1 \\
−\dfrac{an(x)}{a0(x)} & −\dfrac{an−1(x)}{a0(x)} & −\dfrac{an−2(x)}{a0(x)} & −\dfrac{an−3(x)}{a0(x)} & \cdots & −\dfrac{a1(x)}{a0(x)}
\end{pmatrix} .    (32)
It immediately follows that if the yi (x) are linearly independent solutions to eq. (31),
then the Wronskian matrix Φ, defined in eq. (2), satisfies the first order matrix differ-
ential equation,
dΦ/dx = A(x) Φ .    (33)
Using eq. (39) of Appendix B, it follows that
(d/dx) det Φ = det Φ Tr(Φ⁻¹ dΦ/dx) = det Φ Tr(Φ⁻¹ A(x) Φ) = det Φ Tr A(x) ,
after employing eq. (33) and the cyclicity property of the trace (i.e. the trace is un-
changed by cyclically permuting the matrices inside the trace). Hence, in terms of the
Wronskian, W ≡ det Φ, defined in eq. (3),
dW/dx = W Tr A(x) .    (34)
This is a separable first order differential equation for W that is easily integrated,
W(x) = W(x0) exp[ ∫_{x0}^{x} Tr A(t) dt ] .
Using eq. (32), it follows that Tr A(t) = −a1 (t)/a0 (t). Hence, we arrive at Abel’s
formula,

W(x) = W(x0) exp[ −∫_{x0}^{x} a1(t)/a0(t) dt ] .    (35)
Note that if W(x0) ≠ 0, then the result for W(x) is strictly positive or strictly negative
depending on the sign of W (x0 ). This confirms our assertion that the Wronskian either
vanishes for all values of x or it is never equal to zero.
Of course, eq. (35) is equivalent to the version of Abel’s formula obtained in eq. (9)
in the case of the second order linear differential equation given by eq. (4).
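Eq. (35) can be illustrated on an assumed third-order example (not from the notes): for y′′′ = 0 the coefficient a1(x) vanishes, so Abel's formula predicts a constant Wronskian; indeed, the solutions 1, x, x² give W = 2:

```python
# For y''' = 0, a1(x) = 0, so eq. (35) predicts W(x) = W(x0) = const.
from sympy import symbols, wronskian

x = symbols('x')
W = wronskian([1, x, x**2], x)
print(W)  # 2
```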
Reference:
Daniel Zwillinger, Handbook of Differential Equations, 3rd Edition (Academic Press,
San Diego, CA, 1998).
APPENDIX B: Derivative of the determinant of a matrix
Recall that for any matrix A, the determinant can be computed by the cofactor
expansion. The adjugate of A, denoted by adj A, is equal to the transpose of the matrix
of cofactors. In particular,
det A = Σ_j aij (adj A)ji ,    for any fixed i ,    (36)
where the aij are elements of the matrix A and (adj A)ji = (−1)i+j Mij where the minor
Mij is the determinant of the matrix obtained by deleting the ith row and jth column
of A.
Suppose that the elements aij depend on a variable x. Then, by the chain rule,
(d/dx) det A = Σ_{i,j} (∂ det A/∂aij) (daij/dx) .    (37)
Using eq. (36), and noting that (adj A)ji does not depend on aij (since the ith row and
jth column are removed before computing the minor determinant),
∂ det A/∂aij = (adj A)ji .
Hence, eq. (37) yields Jacobi’s formula: 5
(d/dx) det A = Σ_{i,j} (adj A)ji (daij/dx) = Tr[(adj A) (dA/dx)] .    (38)
Reference:
M.A. Goldberg, The derivative of a determinant, The American Mathematical Monthly,
Vol. 79, No. 10 (Dec. 1972) pp. 1124–1126.
5 Recall that if A = [aij] and B = [bij], then the ij matrix element of AB is given by Σ_k aik bkj. The trace of AB is equal to the sum of its diagonal elements, or equivalently, Tr(AB) = Σ_{j,k} ajk bkj.
6 Note that Tr(cB) = c Tr B for any number c and matrix B. In deriving eq. (39), c = det A.
APPENDIX C: Another derivation of eq. (10)
Given a second order linear differential equation
y ′′ + a(x)y ′ + b(x)y = 0 , (40)
with a known solution y1 (x), then one can derive a second linearly independent solution
y2(x) by the method of variation of parameters.7 Indeed, this is the method employed
in Section 7 of Chapter 8 on p. 434 of Boas [corresponding to Case (e), which Boas
calls reduction of order].
In this context, the idea of this method is to define a new variable v,
y2(x) = y1(x) v(x) = y1(x) ∫ w(x) dx ,    (41)