Galerkin-Wavelet Methods For Two-Point Boundary Value Problems
Figure 1. $\phi(x)$ for $p = 2$.
and
$$\lim_{J\to\infty}\Big\|u-\sum_{j=-1}^{J}\sum_{k}\lambda_{jk}\psi_{jk}\Big\|=0,\qquad\text{for all }u\in L^2(\mathbb R),$$
where
$$(2.10)\qquad \lambda_{jk}=(u,\psi_{jk}).$$
(2) For $j\ge 0$, the first $p$ moments of $\psi_{jk}$ are equal to zero:
$$(2.11)\qquad \int_{-\infty}^{\infty}x^m\psi_{jk}(x)\,dx=0,\qquad 0\le m\le p-1.$$
We shall now give a brief illustration of the assertions in the above proposition; for detailed proofs, we refer to [7, 23]. By (2.2) and (2.5), we have $\int_{-\infty}^{\infty}\phi\,dx=1$. By (2.4), (2.6) and induction we get $(\phi(x),\phi(x-k))=\delta_{0k}$, which together with (2.7) shows that $(\phi(x),\psi(x-k))=0$. Then an application of (2.4) and (2.7) yields $(\psi(x),\psi(x-k))=\delta_{0k}$. Thus (2.9) is proved for $\{j,\ell\}\subset\{-1,0\}$. The general case with $j=\ell$ follows from a proper change of the variable $x$. Finally, by (2.5) and induction we can prove (2.9) generally for $j<\ell$.

To see (2.11), let $\hat\psi(\omega)$ be the Fourier transform of $\psi(x)$, and let $P(\omega)=\frac12\sum_k c_k e^{ik\omega}$ and $Q(\omega)=\frac12\sum_k(-1)^k c_{1-k}e^{ik\omega}$. Since $\hat\phi(0)=1$, we have formally
$$\hat\psi(\omega)=Q\Big(\frac{\omega}{2}\Big)\prod_{n=2}^{\infty}P\Big(\frac{\omega}{2^n}\Big).$$
Figure 2. $\psi(x)$ for $p = 1$.
Figure 3. $\psi(x)$ for $p = 2$.
Note that $P(0)=1$ and, by (2.3), $Q^{(m)}(0)=0$. Thus $\hat\psi^{(m)}(0)=0$ for $0\le m\le p-1$, and (2.11) is proved for $j=0$ and $k=1-p$. The general case follows from a proper change of variables.
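These moment conditions can be checked numerically. The sketch below (in Python, although the programs described in §5 are FORTRAN) evaluates $P(0)$ and $Q^{(m)}(0)$ for the standard order-$p=2$ coefficients of [7], normalized so that $\sum_k c_k=2$; the coefficient values and all names are our own illustration, not taken from the paper's code.

```python
import numpy as np

# Daubechies coefficients for p = 2 (R = 3), normalized so that sum(c) = 2.
s3 = np.sqrt(3.0)
c = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0   # c_0, ..., c_3

print(0.5 * c.sum())                    # P(0) = 1
ks = np.arange(-2, 2)                   # the k with c_{1-k} != 0
q = (-1.0) ** ks * c[1 - ks]            # Fourier coefficients of Q
for m in range(2):                      # the first p = 2 moments
    # Q^{(m)}(0) = (i^m / 2) sum_k k^m q_k; both sums below vanish.
    print(m, 0.5 * np.sum(ks ** m * q))
```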
It can be shown that the polynomials $1,\,x,\,\dots,\,x^{p-1}$ lie in the subspace spanned by $\{\psi_{-1,k}\}$.
For example, by (2.3), (2.6) and induction, one can show that
$$(2.12)\qquad \sum_k\phi(x-k)=1.$$
Given any integer $J\ge0$, we define
$$V_J=\operatorname{span}\{\psi_{jk}\mid -1\le j<J,\ k\in\mathbb Z\}.$$
Obviously $V_J\subset V_{J+1}$. By Proposition 2.1, $\bigcup_{J\ge0}V_J$ is dense in $L^2(\mathbb R)$. It is worth noting that there is another interesting orthonormal basis for $V_J$ in addition to $\{\psi_{jk}\}$. To see this, let
$$\Phi_{jk}(x)=\sqrt{2^j}\,\phi(2^jx-k)$$
and
$$\tilde V_J=\operatorname{span}\{\Phi_{Jk}\mid k\in\mathbb Z\}.$$
By (2.7), $\psi_{jk}\in\tilde V_{j+1}$, and (2.5) implies $\tilde V_j\subset\tilde V_{j+1}$; thus we conclude $V_J\subset\tilde V_J$. Similarly the reverse inclusion holds. Therefore $V_J=\tilde V_J$ for all $J\ge0$. Since $(\phi(x),\phi(x-k))=\delta_{0k}$, we have $(\Phi_{j\ell},\Phi_{jk})=\delta_{\ell k}$. Consequently $\{\Phi_{Jk}\}$ is another orthonormal basis of $V_J$. The sequence $\tilde V_{-1}\subset\tilde V_0\subset\tilde V_1\subset\cdots$ defines the so-called multiresolution analysis of $L^2(\mathbb R)$ [20, 23], while $\operatorname{span}\{\sqrt{2^j}\,\psi(2^jx-k)\mid k\in\mathbb Z\}$ is the orthogonal complement of $\tilde V_j$ in $\tilde V_{j+1}$.
For convenience of exposition, we shall use the term "frame" for a "basis" of a vector space in a weak sense; see [9, 16].

Definition 2.2. Let $\{\chi_n\}_{n=1}^{\infty}$ be a subset of a Banach space $(X,\|\cdot\|_X)$. Let $\operatorname{span}\{\chi_n\mid 1\le n<\infty\}$ be the set of all elements $\sum_n\alpha_n\chi_n$ ($\alpha_n\in\mathbb R$) which converge (strongly) in $X$. Then $\{\chi_n\}$ is said to be a frame of $X$ if $\operatorname{span}\{\chi_n\}=X$.

Note that a frame is not necessarily an algebraic basis; linear independence is not required.
In the forthcoming sections, we shall be concerned with the space $L^2(\Omega)$, where $\Omega$ is a bounded open interval. It is clear that the $\psi_{jk}$ form a frame for $L^2(\Omega)$. In fact, those with $\operatorname{supp}\psi_{jk}\cap\Omega\ne\emptyset$ suffice for a frame; that is, $\{\psi_{jk}\mid j\ge-1,\ k\in I_j\}$, where
$$(2.13)\qquad I_j=\{k\in\mathbb Z\mid 1-R\le k\le 2^{\hat\jmath}R-1\},\qquad\text{with }\hat\jmath=\max\{0,j\}.$$
Correspondingly, we define
$$(2.14)\qquad V_J(\Omega)=\operatorname{span}\{\psi_{jk}\mid -1\le j<J,\ k\in I_j\}\quad\text{and}\quad \tilde V_J(\Omega)=\operatorname{span}\{\Phi_{Jk}\mid k\in I_J\}.$$
It is easy to see that $\tilde V_J(\Omega)=V_J(\Omega)$. Also, $V_j(\Omega)\subset V_{j+1}(\Omega)$, and $\bigcup_{J\ge0}V_J(\Omega)$ is dense in $L^2(\Omega)$.
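For concreteness, the index sets (2.13) are easy to tabulate; a small sketch (the function name is ours):

```python
# Index sets (2.13) for Omega = (0, R), R = 2p - 1.
def index_set(j, p):
    R = 2 * p - 1
    jhat = max(0, j)                           # jhat = max{0, j}
    return list(range(1 - R, 2 ** jhat * R))   # 1 - R <= k <= 2^jhat R - 1

# For p = 2: I_{-1} = I_0 = {-2, ..., 2}, I_1 = {-2, ..., 5}.
print(index_set(-1, 2), index_set(1, 2))
```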
3. Anti-derivatives of wavelets and their approximation properties. In this section, the anti-derivatives of wavelets will be used to construct frames for $H^1_0(a,b)$, $H^1_-(a,b)$ (by which we denote the subspace of functions in $H^1(a,b)$ vanishing at $x=a$) and $H^1(a,b)$, respectively. Approximation properties of these anti-derivatives will be discussed.

Given any $f\in L^1(a,b)$, we shall use the notation $\bar f$ to denote its mean value, i.e.
$$\bar f=\frac{1}{b-a}\int_a^b f\,dx.$$
We begin our discussion with general functions.

Lemma 3.1. Let $\{\rho_n(x)\}_{n=1}^{\infty}$ be a frame of $L^2(a,b)$. Then
$$(3.1)\qquad \Big\{\Psi_n(x)=\int_a^x\rho_n\,ds-(x-a)\bar\rho_n\ \Big|\ a\le x\le b\Big\}\subset H^1_0(a,b),$$
$$(3.2)\qquad \Big\{\Psi_n(x)=\int_a^x\rho_n\,ds\ \Big|\ a\le x\le b\Big\}\subset H^1_-(a,b),$$
and, for any $\alpha\in[-\infty,a)$,
$$(3.3)\qquad \Big\{\Psi_n(x)=\int_\alpha^x\rho_n\,ds\ \Big|\ a\le x\le b\Big\}\subset H^1(a,b)$$
are frames of $H^1_0(a,b)$, $H^1_-(a,b)$ and $H^1(a,b)$, respectively.
Proof. We shall only present the proof for $H^1_0(a,b)$, since the proofs for the other cases are similar. Let $\Psi_n(x)$ be defined by (3.1); clearly $\Psi_n\in H^1_0(a,b)$. Given any $v\in H^1_0(a,b)$, let $w=v'$. Note that $\int_a^b w\,dx=0$.

Since $w\in L^2(a,b)$, there are numbers $\{\beta_n\}_{n=1}^{\infty}$ such that
$$\lim_{N\to\infty}\int_a^b(w-w_N)^2\,dx=0,$$
where $w_N(x)=\sum_{n=1}^N\beta_n\rho_n$. Since $\bar w_N=\sum_{n=1}^N\beta_n\bar\rho_n$, we have
$$|\bar w_N|=\Big|\frac{1}{b-a}\int_a^b w_N(x)\,dx\Big| =\Big|\frac{1}{b-a}\int_a^b[w_N(x)-w(x)]\,dx\Big| \le\frac{1}{\sqrt{b-a}}\,\|w-w_N\|_0.$$
Hence $\lim_{N\to\infty}\bar w_N=0$ and
$$\Big\|v'-\sum_{n=1}^N\beta_n\big(\rho_n(x)-\bar\rho_n\big)\Big\|_0 \le\|v'-w_N\|_0+\sqrt{b-a}\,|\bar w_N|\to0$$
as $N\to\infty$. Thus if we take $v_N=\sum_{n=1}^N\beta_n\Psi_n$, then $v_N\in H^1_0(a,b)$ and
$$\lim_{N\to\infty}|v-v_N|_{H^1(a,b)}=0.$$
Applying Lemma 3.1 with $(a,b)=(0,R)$ to the frame $\{\psi_{jk}\}$ of $L^2(\Omega)$ yields frames
$$(3.4)\qquad \Psi_{jk}(x)=\int_0^x\psi_{jk}\,ds-x\,\bar\psi_{jk}$$
of $H^1_0(\Omega)$ and
$$(3.5)\qquad \Psi_{jk}(x)=\int_0^x\psi_{jk}\,ds$$
of $H^1_-(\Omega)$. For $H^1(\Omega)$ it is more convenient to integrate from $-\infty$. So if we define
$$(3.6)\qquad \Phi(x)=\int_{-\infty}^{x}\phi(s)\,ds,\qquad \Psi(x)=\int_{-\infty}^{x}\psi(s)\,ds,\qquad 0\le x\le R,$$
then for $j\ge0$, $\int_{-\infty}^{x}\psi_{jk}(s)\,ds=(1/\sqrt{2^j})\,\Psi(2^jx-k)$. Hence we have a frame for $H^1(\Omega)$ consisting of anti-derivatives of wavelets with $\operatorname{supp}\psi_{jk}\cap(0,R)\ne\emptyset$, as seen in the next lemma.
Lemma 3.3. $\{1,\ \Psi_{jk}\mid j\ge-1,\ k\in I_j\}$ is a frame of $H^1(\Omega)$, where
$$(3.7)\qquad \Psi_{jk}(x)=\begin{cases}\Phi(x-k),&\text{for }j=-1,\\[2pt] \dfrac{1}{\sqrt{2^j}}\,\Psi(2^jx-k),&\text{for }j\ge0,\end{cases}\qquad k\in I_j,\ 0\le x\le R.$$
Note that $\Psi\in C^0$ for $p=1$ and $\Psi\in C^1$ for $p=2$. The support of $\Psi(x)$ is also $[0,R]$, but $\Phi(x)=1$ for $x\in[R,\infty)$. Pictures of $\Phi$ and $\Psi$ for $p=2$ are shown in Figure 4.
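In practice $\Phi$ and $\Psi$ can be tabulated from dyadic samples of $\phi$ and $\psi$ (cf. §5, where Simpson's rule is used). A minimal cumulative-trapezoid sketch, assuming `psi_vals` holds $\psi(k/2^D)$ for $0\le k\le 2^DR$:

```python
import numpy as np

def antiderivative(vals, D):
    """Cumulative trapezoidal integral of dyadic samples with step 2^{-D}."""
    h = 2.0 ** (-D)
    return np.concatenate(([0.0],
                           np.cumsum((vals[1:] + vals[:-1]) / 2.0) * h))

# Psi = antiderivative(psi_vals, D) returns to 0 at x = R, since the zeroth
# moment of psi vanishes; Phi = antiderivative(phi_vals, D) ends at 1.
```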
Thus, from the anti-derivatives of wavelets, we have constructed a frame for each of the spaces $H^1_0(\Omega)$, $H^1_-(\Omega)$ and $H^1(\Omega)$. We shall now discuss the approximation properties of these frames.

Lemma 3.4. For $j\ge0$ and $p\ge1$, let $f\in H^p$ and let $\lambda_{jk}$ be given by (2.10). Then
$$|\lambda_{jk}|\lesssim 2^{-js}\,|f|_{s,S_{jk}},\qquad 0\le s\le p,$$
where $S_{jk}=\operatorname{supp}\psi_{jk}$.
Figure 4. $\Phi(x)$ (dashed); $\Psi(x)$ (solid).
Proof. By (2.10) and (2.11) we have, for any polynomial $q(x)$ of degree $\le s-1$,
$$\lambda_{jk}=\int_{-\infty}^{\infty}(f-q)\,\psi_{jk}\,dx=\int_{S_{jk}}(f-q)\,\psi_{jk}\,dx \le\|f-q\|_{0,S_{jk}}\,\|\psi_{jk}\|=\|f-q\|_{0,S_{jk}}.$$
Since $|S_{jk}|=2^{-j}R$, a standard Bramble–Hilbert scaling argument shows that
$$|\lambda_{jk}|\le\inf_q\|f-q\|_{0,S_{jk}}\lesssim 2^{-js}\,|f|_{s,S_{jk}}.$$
A direct consequence of Lemma 3.4 is
$$\Big\|u-\sum_{j=-1}^{J}\sum_k\lambda_{jk}\psi_{jk}\Big\|\lesssim 2^{-Js}\,|u|_s,\qquad 0\le s\le p,$$
but this estimate will not be used directly in this paper.
Since $\bigcup_{J\ge0}V_J(\Omega)$ is dense in $L^2(\Omega)$ and each $V_J(\Omega)$ is finite dimensional, it is natural to construct finite dimensional subspaces of $H^1_0(\Omega)$, etc., from the frames of $V_J(\Omega)$. The following theorems concern the distance between these subspaces and $H^1_0(\Omega)$, etc.

Theorem 3.1. Given $J\ge0$, define
$$(3.8)\qquad S_J=\operatorname{span}\big\{\Psi_{jk}\mid -1\le j<J,\ k\in I_j\big\}.$$
Then $S_J$ is a finite dimensional subspace of $H^1_0(\Omega)$ (resp. $H^1_-(\Omega)$) if the $\Psi_{jk}$ are chosen from (3.4) (resp. (3.5)). For any $v\in H^1_0(\Omega)\cap H^{s+1}(\Omega)$ (resp. $v\in H^1_-(\Omega)\cap H^{s+1}(\Omega)$), we have
$$(3.9)\qquad \inf_{\chi\in S_J}|v-\chi|_{1,\Omega}\lesssim h^s\,|v|_{s+1,\Omega},\qquad 0\le s\le p,$$
where $h=2^{-J}$.
Proof. Without loss of generality, we only present the proof for the case $s=p$ with $\Psi_{jk}$ given by (3.4); the proofs for the other cases are similar.

Let $v\in H^1_0(\Omega)\cap H^{p+1}(\Omega)$. By the Sobolev extension theorem, there is a $\tilde v\in H^{p+1}(\mathbb R)$ such that $\tilde v(x)=v(x)$ for $x\in\Omega$ and
$$\|\tilde v\|_{p+1}\lesssim\|v\|_{p+1,\Omega}.$$
By Lemma 3.1 and (2.10),
$$\lim_{\ell\to\infty}\Big|v-\sum_{j=-1}^{\ell}\sum_{k\in I_j}\lambda_{jk}\Psi_{jk}\Big|_{1,\Omega}=0,$$
where $\lambda_{jk}=(\tilde v',\psi_{jk})$. Let $e_j=\sum_{k\in I_j}\lambda_{jk}\Psi_{jk}$ and $\chi=\sum_{j=-1}^{J-1}e_j$. Then $\chi\in S_J$ and
$$|v-\chi|_{1,\Omega}\lesssim\sum_{j\ge J}\|e_j'\|_{0,\Omega}.$$
For each $j$, let $I_j'\subset I_j$ consist of the indices $k$ such that $\operatorname{supp}\psi_{jk}\subset[0,R]$; that is, $0\le k\le(2^{\hat\jmath}-1)R$. Let $I_j''=I_j\setminus I_j'$. Since $j\ge0$, we have
$$\int_0^R|e_j'|^2\,dx=\sum_{k\in I_j}\sum_{\ell\in I_j}\lambda_{jk}\lambda_{j\ell}\int(\psi_{jk}-\bar\psi_{jk})(\psi_{j\ell}-\bar\psi_{j\ell})\,dx$$
$$=\sum_{k\in I_j'}\lambda_{jk}^2 +2\sum_{k\in I_j'}\sum_{\ell\in I_j''}\lambda_{jk}\lambda_{j\ell}\int\psi_{jk}(\psi_{j\ell}-\bar\psi_{j\ell})\,dx +\sum_{k,\ell\in I_j''}\lambda_{jk}\lambda_{j\ell}\int(\psi_{jk}-\bar\psi_{jk})(\psi_{j\ell}-\bar\psi_{j\ell})\,dx.$$
$\cdots$ and $S_J=\operatorname{span}\{1,\ \Lambda_{Jk}\mid k\in I_J\}$.
4. Applications to two-point boundary value problems. In this section, we apply the wavelets of order $p$ to the numerical solution of the two-point boundary value problem*
$$(4.1)\qquad -(q(x)u')'+r(x)u=f(x),\qquad x\in(0,R),$$
with one of the boundary conditions
$$\begin{cases}u(0)=u(R)=0 &\text{(Dirichlet condition)},\\ u(0)=u'(R)=0 &\text{(mixed condition)},\\ u'(0)=u'(R)=0 &\text{(Neumann condition)}.\end{cases}$$
We also let $\Omega=(0,R)$, where $R=2p-1$. We assume that $f\in L^2(\Omega)$ and that the coefficients $q$ and $r$ are smooth on $[0,R]$ with
$$(4.2)\qquad 0<\underline q\le q(x)\le\bar q\qquad\text{and}\qquad 0\le r(x)\le\bar r.$$
We now define Galerkin-wavelet methods for solving (4.1). According to the boundary condition, we construct a finite dimensional subspace $S_J$ of the solution space as defined in Theorems 3.1 and 3.2. Then we solve numerically the Galerkin projection of the solution in $S_J$. We first give a general account of the method and its error estimates in §4.1, then construct two bases for each $S_J$ in §4.2. Some computational aspects are considered in §4.3 and §4.4. Finally, we discuss connections with hierarchical bases in the last subsection.
4.1. Error estimates. In this subsection we shall only consider the Dirichlet boundary value problem; thus $S_J$, $J\ge0$, is the subspace of $H^1_0(\Omega)$ with frame $\Psi_{jk}$ from (3.4) or $\Lambda_{Jk}$ from (3.11). The other cases are similar, with an appropriate choice of $\Psi_{jk}$ or $\Lambda_{Jk}$. The variational form of (4.1) is
$$(4.3)\qquad a(u,v)=(f,v)\qquad\text{for all }v\in H^1_0(\Omega),$$
where $a(\cdot,\cdot)$ is the bilinear form defined by
$$(4.4)\qquad a(u,v)=\int_0^R qu'v'+ruv\,dx.$$
Clearly $a(\cdot,\cdot)$ is continuous and coercive on $H^1_0(\Omega)$. It is well known that, by the Lax–Milgram lemma (cf. [3]), there is a unique weak solution $u\in H^1_0(\Omega)$ of (4.3). Let $u_J$ be the Galerkin projection of $u$ on $S_J$, defined by
$$(4.5)\qquad a(u_J,\chi)=(f,\chi)\qquad\text{for all }\chi\in S_J.$$
It is also clear that (4.5) has a unique solution, and that
$$\|u-u_J\|_{1,\Omega}\lesssim\inf_{\chi\in S_J}\|u-\chi\|_{1,\Omega}.$$
By Theorem 3.1 and a standard duality argument (cf. [3]), error estimates in both the $H^1$ and the $L^2$ norm can be obtained.

Theorem 4.1. Let $u$ and $u_J$ be the solutions of (4.3) and (4.5), respectively. Then for any $u\in H^1_0(\Omega)\cap H^{s+1}(\Omega)$,
$$(4.6)\qquad \|u-u_J\|_{0,\Omega}+h\,\|u-u_J\|_{1,\Omega}\lesssim h^{s+1}\,\|u\|_{s+1,\Omega},\qquad 0\le s\le p,$$
where $h=2^{-J}$.
* Without loss of generality, the special interval $(0,R)$ is used in this model problem; a more general interval $(a,b)$ can be reduced to $(0,R)$ by a simple change of variable.
4.2. Basis functions from frames. It is well known that the approximate solution $u_J$ can be obtained by solving a linear algebraic system by means of a basis of $S_J$. But the frames of $S_J$ discussed earlier are not bases. In order to obtain a basis from a frame, we need to exclude some linearly dependent functions. We first study bases for $V_J(\Omega)$.

Proposition 4.2. For any $J\ge0$, $\dim V_J(\Omega)=2^JR+R-1$. Define the index set $D_j$ by
$$(4.7)\qquad k\in D_j\iff\begin{cases}1-R\le k\le R-1,&\text{if }j=-1,\\ p-R\le k\le 2^jR-p,&\text{if }j\ge0.\end{cases}$$
Then
$$(4.8)\qquad \{\Phi_{Jk}\mid 1-R\le k\le 2^JR-1\}$$
and
$$(4.9)\qquad \{\psi_{jk}\mid -1\le j<J,\ k\in D_j\}$$
are bases of $V_J(\Omega)$.
Proof. For any $J\ge0$, the functions in (4.8) form a frame of $V_J(\Omega)$, as defined in (2.14). To show that they actually form a basis, it suffices to show that they are linearly independent. Suppose
$$\eta=\sum_{k=1-R}^{2^JR-1}\alpha_k\Phi_{Jk}=0\quad\text{in }\Omega.$$
Note that for any $0\le m\le 2^JR-R$, the support of $\Phi_{Jm}$ lies entirely in $[0,R]$; thus $(\Phi_{Jm},\Phi_{J\ell})=\delta_{m\ell}$ for all $\ell$, and hence $\alpha_m=(\eta,\Phi_{Jm})=0$. Observe that the value of $\eta(x)$ for $R-2\le 2^Jx\le R-1$ is solely determined by $\alpha_{-1}\Phi_{J,-1}$. But $\Phi_{J,-1}\ne0$ in that interval, so $\alpha_{-1}=0$. The same argument shows that $\alpha_k=0$ for all $1-R\le k\le 2^JR-1$. Consequently the functions in (4.8) form a basis of $V_J(\Omega)$. Using this basis, we count that the dimension of $V_J(\Omega)$ is $2^JR+R-1$.
Now we show by induction that the wavelets in (4.9) are also basis functions of $V_J(\Omega)$. For $J=0$, the functions in (4.9) are precisely those in (4.8), so they form a basis of $V_0(\Omega)$. Suppose that the $\psi_{jk}$, for $-1\le j\le J-1$ and $k\in D_j$, form a basis of $V_J(\Omega)$. We shall now prove that the $\psi_{jk}$, for $-1\le j\le J$ and $k\in D_j$, form a basis of $V_{J+1}(\Omega)$. To see this, we first prove that they are linearly independent. Assume
$$\eta=\sum_{j=-1}^{J}\sum_{k\in D_j}\lambda_{jk}\psi_{jk}=0\quad\text{in }\Omega.$$
Then there are uniquely determined numbers $\alpha_k$ such that
$$\eta=\sum_{k=1-R}^{2^JR-1}\alpha_k\Phi_{Jk}+\sum_{k\in D_J}\lambda_{Jk}\psi_{Jk}\quad\text{in }\Omega.$$
For the same reason as in the first paragraph, we have $\lambda_{Jk}=0$ for all $0\le k\le 2^JR-R$. By (2.7),
$$\psi(2^Jx-k)=-c_0\,\phi(2^{J+1}x-(R+2k))+\cdots+c_R\,\phi(2^{J+1}x-2k).$$
Hence for any $p-R\le k<0$ and $2^JR-R<k\le 2^JR-p$ we have, respectively, $1\le R+2k<R$ and $2^{J+1}R-2R\le 2k\le 2^{J+1}R-R-1$. That means that either $\phi(2^{J+1}x-(R+2k))$ or $\phi(2^{J+1}x-2k)$ has its support in $[0,R]$. The orthogonality of the translates of $\phi(x)$ and $\phi(2x)$ in $L^2(\mathbb R)$ then implies that either $\lambda_{Jk}c_0=0$ or $\lambda_{Jk}c_R=0$. Since $c_0\ne0$ and $c_R\ne0$, we conclude that $\lambda_{Jk}=0$. Thus $\lambda_{Jk}=0$ for all $k\in D_J$. By the induction hypothesis, $\lambda_{jk}=0$ for all $-1\le j\le J-1$ and $k\in D_j$.

Therefore we have shown that the set $\{\psi_{jk}\mid -1\le j\le J,\ k\in D_j\}$ is linearly independent in $V_{J+1}(\Omega)$. Counting its cardinality, we see that it is a basis of $V_{J+1}(\Omega)$. The general result follows by induction.
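As a quick sanity check on Proposition 4.2, the cardinalities of the sets $D_j$ can be summed in a few lines (a sketch with our own names):

```python
# |D_{-1}| + |D_0| + ... + |D_{J-1}| must equal dim V_J(Omega) = 2^J R + R - 1.
def D_size(j, p):
    R = 2 * p - 1
    if j == -1:
        return (R - 1) - (1 - R) + 1          # 1 - R <= k <= R - 1
    return (2 ** j * R - p) - (p - R) + 1     # p - R <= k <= 2^j R - p

for p in (1, 2, 3):
    R = 2 * p - 1
    for J in range(6):
        assert sum(D_size(j, p) for j in range(-1, J)) == 2 ** J * R + R - 1
print("cardinalities match Proposition 4.2")
```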
If (4.1) is imposed with the Neumann boundary condition, and $S_J\subset H^1(\Omega)$ is the space defined in Theorem 3.2, then by the linear independence of (4.8) and (4.9) it is easy to see that
$$(4.10)\qquad \{1,\ \Psi_{jk}\mid -1\le j<J,\ k\in D_j\},\qquad\text{where }\Psi_{jk}\text{ is defined in (3.7)},$$
and
$$(4.11)\qquad \{1,\ \Lambda_{Jk}\mid k\in I_J\},\qquad\text{where }\Lambda_{jk}\text{ is defined in (3.13)},$$
are bases of $S_J$.
Bases for $S_J\subset H^1_-(\Omega)$ are also easy to construct if (4.1) is imposed with the mixed boundary condition. But there is another twist for the subspace $S_J\subset H^1_0(\Omega)$, because of the extra term $x\,\bar\psi_{jk}$. Let us apply (3.4) to the functions in (4.9) with $j=-1$. By (2.12), we have
$$(4.12)\qquad \sum_{k=1-R}^{R-1}\Psi_{-1,k} =\sum_{k=1-R}^{R-1}\Big(\int_0^x\phi(s-k)\,ds-\frac{x}{R}\int_0^R\phi(s-k)\,ds\Big) =\int_0^x\sum_k\phi(s-k)\,ds-\frac{x}{R}\int_0^R\sum_k\phi(s-k)\,ds =x-\frac{x}{R}\cdot R=0.$$
Thus the $\Psi_{-1,k}$, $k\in D_{-1}$, are linearly dependent. The reason is that $1$ can be spanned by the $\phi(x-k)$.
Proposition 4.3. Let $D_j$ be the index set defined in (4.7). Let $\tilde D_j=D_j$ for $j\ge0$ and $\tilde D_{-1}=D_{-1}\setminus\{1-R\}$. Then
$$(4.13)\qquad \{\Psi_{jk}\mid -1\le j<J,\ k\in\tilde D_j\},\qquad\text{where }\Psi_{jk}\text{ is defined in (3.4)},$$
and
$$(4.14)\qquad \{\Lambda_{Jk}\mid 2-R\le k\le 2^JR-1\},\qquad\text{where }\Lambda_{jk}\text{ is defined in (3.11)},$$
are bases of $S_J$, where $S_J$ is the subspace of $H^1_0(\Omega)$ defined in Theorem 3.1.
Proof. Let $\Psi_{jk}$ be the functions in (4.13). Suppose $\sum_{j,k}\lambda_{jk}\Psi_{jk}=0$, which can be written as
$$(4.15)\qquad \int_0^x\eta\,ds=\mu x\quad\text{in }\Omega,$$
where
$$\eta=\sum_{j=-1}^{J-1}\sum_{k\in\tilde D_j}\lambda_{jk}\psi_{jk} \qquad\text{and}\qquad \mu=\sum_{j,k}\lambda_{jk}\bar\psi_{jk}.$$
Obviously (4.15) implies that $\eta=\mu$ is a constant function in $\Omega$. By (2.12), we have
$$-\mu\,\psi_{-1,1-R}+\sum_{k\in\tilde D_{-1}}(\lambda_{-1,k}-\mu)\,\psi_{-1,k}+\sum_{j=0}^{J-1}\sum_{k\in D_j}\lambda_{jk}\psi_{jk}=0.$$
Figure 5. $\Psi_{1,-1}$; $\Psi_{1k}$, $0\le k\le3$; - - $\Psi_{14}$.
Since the $\psi_{jk}$ in (4.9) are linearly independent, $\mu=0$ and all $\lambda_{jk}=0$. Hence the functions in (4.13) are linearly independent.

We have already seen in (4.12) that $\Psi_{-1,1-R}$ is a linear combination of the $\Psi_{-1,k}$, $k\in\tilde D_{-1}$. But, as defined in Theorem 3.1, $S_J$ is spanned by the $\Psi_{jk}$ for $-1\le j<J$ and $k\in D_j$. Therefore the wavelets in (4.13) span $S_J$, and thus form a basis.

Similarly we can show that the set of functions in (4.14) is linearly independent. Since its cardinality equals the dimension of $S_J$, it is a basis.

Pictures of $\Psi_{1k}$ for $k\in\tilde D_1$ are shown in Figure 5.
4.3. Solution of linear algebraic systems. Suppose that (4.1) is imposed with the Dirichlet boundary condition and that the basis of $S_J$ is given by (4.13). Let $u_J=\sum u_{jk}\Psi_{jk}$ be the Galerkin projection of $u$ in $S_J$; the coefficients $u_{jk}$ are then determined by the linear system of equations
$$\sum_{j,k}u_{jk}\,a(\Psi_{jk},\Psi_{\ell m})=(f,\Psi_{\ell m}),\qquad\text{for all }-1\le\ell<J,\ m\in\tilde D_\ell.$$
We now label the double indices linearly. Let $\nu:(j,k)\mapsto i$ be the labelling function which enumerates $(j,k)$ lexicographically; that is, $(j,k)\preceq(\ell,m)$ if $j<\ell$, or if $j=\ell$ and $k\le m$. We then get a linear system of the form $A\mathbf u=\mathbf f$, where the matrix $A=(a_{ij})$ and the vector $\mathbf f=(f_j)$ have the entries
$$(4.16)\qquad a_{ij}=a(\Psi_i,\Psi_j),\qquad f_j=(f,\Psi_j).$$
Here $a(\cdot,\cdot)$ is the bilinear form defined in (4.4), and $A$ corresponds to the stiffness matrix in the usual finite element method. Because the $\Psi_i$ form a basis and $a(\cdot,\cdot)$ is symmetric and coercive on $S_J$, $A$ is symmetric and positive definite.
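For illustration, if the basis functions and the data are tabulated on the dyadic grid of §5, the entries (4.16) can be assembled in one quadrature sweep. A sketch (trapezoidal weights here, whereas the actual programs use Simpson's rule; all array names are assumptions of ours):

```python
import numpy as np

def assemble(Psi, dPsi, q_s, r_s, f_s, D):
    """Psi, dPsi: (N, 2^D R + 1) arrays of Psi_i and Psi_i' at x = k/2^D;
    q_s, r_s, f_s: samples of q, r, f on the same grid."""
    w = np.full(Psi.shape[1], 2.0 ** (-D))      # trapezoidal weights
    w[[0, -1]] /= 2.0
    A = (dPsi * (q_s * w)) @ dPsi.T + (Psi * (r_s * w)) @ Psi.T  # (4.16)
    f = Psi @ (f_s * w)
    return A, f
```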
If $r(x)=0$ and $q(x)=1$, then $A$ is sparse. In fact, let $P$ be the permutation matrix which relabels the $\Psi_{jk}$ in such a way that those with $0\le k\le 2^jR-R$ (whose supports lie entirely in $[0,R]$) are indexed together after the others; then one can see that
$$(4.17)\qquad A=P\begin{pmatrix}B&0\\0&I\end{pmatrix}P^t.$$
Let $N=\dim S_J$; then $B$ is a full square matrix of order $O(\log N)$. As a result, the eigenvalues of $A$ are mostly $1$'s, with a few exceptions caused by the wavelets near the boundary. Similar results also hold if $q(x)$ and $r(x)$ are not constant. In fact, the following proposition can be proved by a simple application of the Courant–Fischer minimax theorem [11].
Proposition 4.4. For general coefficients $q(x)$ and $r(x)$ satisfying (4.2), all but $O(\log N)$ of the eigenvalues of $A$ are uniformly bounded below and above.

If the conjugate gradient method is used to solve a linear system, one finds that when most of the eigenvalues are clustered in one place, the convergence can be fast even if the condition number is large.

Proposition 4.5 ([1]). Let $u_k$ be the $k$-th approximate solution of $A\mathbf u=\mathbf f$ by the conjugate gradient method. Suppose the spectrum of $A$ is $\sigma_0\cup\sigma_1$, where $\ell=|\sigma_1|$ and
$$\kappa=(\max\sigma_0)\big/(\min\sigma_0).$$
Then
$$\|u-u_k\|_A\le M\Big(\frac{\sqrt\kappa-1}{\sqrt\kappa+1}\Big)^{k-\ell},$$
where $\|\cdot\|_A^2=(A\,\cdot,\cdot)$ and $M$ is a constant depending on the spectrum of $A$.
By Propositions 4.4 and 4.5, the convergence of the conjugate gradient method for solving $A\mathbf u=\mathbf f$ will be uniform after $O(\log N)$ steps. This is already very attractive, since $A$ is a matrix of order $N$.
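The effect behind Proposition 4.5 is easy to reproduce. The following toy sketch (ours, not from the paper) runs plain conjugate gradients on a synthetic symmetric positive definite matrix whose spectrum is clustered at $1$ with three outliers; the residual stalls for about $|\sigma_1|$ steps and then drops sharply:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
eig = np.ones(n)
eig[:3] = [50.0, 7.0, 0.02]                  # sigma_1: three outliers
Qm, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Qm @ np.diag(eig) @ Qm.T                 # SPD, condition number 2500
b = rng.standard_normal(n)

x = np.zeros(n); r = b.copy(); d = r.copy(); rs = r @ r
for it in range(1, 16):                      # plain CG iteration
    Ad = A @ d
    alpha = rs / (d @ Ad)
    x += alpha * d
    r -= alpha * Ad
    rs, rs_old = r @ r, rs
    d = r + (rs / rs_old) * d
    print(it, np.sqrt(rs))                   # residual norm per step
```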
To further improve the performance of the conjugate gradient method, we have a natural choice of preconditioner for the matrix $A$. Let $B$ be the matrix in (4.17). Since $B$ is small, it is easy to invert by a direct method. Evidently $B$ and $B^{-1}$ are symmetric and positive definite; hence the Cholesky decomposition exists, say $B^{-1}=LL^t$. Then $SS^t$ is a good preconditioner, where
$$(4.18)\qquad S=P\begin{pmatrix}L&0\\0&I\end{pmatrix}.$$
That is, instead of solving $A\mathbf x=\mathbf b$, we solve the equivalent problem
$$S^tAS\,\mathbf y=S^t\mathbf b,\qquad \mathbf x=S\mathbf y,$$
where $S^tAS$ is a well-conditioned matrix. The action of $P$ can be carried out easily during the computation, so we only have to compute and store the lower triangular matrix $L$, which has $O(\log^2N)$ entries. Theoretically, the condition number of $S^tAS$ is one if $q=1$ and $r=0$; in general it is bounded above by $\bar q+\bar r/\underline q$. The preconditioner $S$ is made possible by the orthogonality property of the wavelets.
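A dense-matrix sketch of (4.18), assuming `boundary` lists the indices of the $O(\log N)$ basis functions whose supports stick out of $[0,R]$ (names and layout are ours):

```python
import numpy as np

def make_S(A, boundary):
    N = A.shape[0]
    interior = [i for i in range(N) if i not in set(boundary)]
    perm = list(boundary) + interior           # boundary block first
    B = A[np.ix_(boundary, boundary)]          # the small full block of (4.17)
    L = np.linalg.cholesky(np.linalg.inv(B))   # B^{-1} = L L^t
    blocks = np.eye(N)
    blocks[:len(boundary), :len(boundary)] = L
    return np.eye(N)[:, perm] @ blocks         # S = P diag(L, I)

# Solve S^t A S y = S^t b and set x = S y; for q = 1, r = 0, S^t A S = I.
```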
If (4.1) is imposed with the mixed or the Neumann boundary condition, then it is preferable to construct $S_J$ from the $\Lambda_{Jk}$ in (3.12) or (3.13). We also denote the corresponding stiffness matrix by $A$. Since there are only a constant number ($2R-2$) of basis functions whose supports do not lie in $[0,R]$, all but $O(1)$ of $A$'s eigenvalues are uniformly bounded. Finally, we remark that (1) the matrix $A$ is sparse if $r=0$, and (2) we can construct a preconditioner in the same manner as $S$ in (4.18).
4.4. Reduction of operations. Let $A$ be the stiffness matrix with respect to the basis $\{\Psi_{jk}\}$ in (4.13). If $q$ is not a constant or $r\ne0$, then $A$ is a full matrix and its multiplication with a vector becomes expensive. We can overcome this drawback by changing the basis (as is done with hierarchical bases, cf. [28]). Recall that the $\Lambda_{Jk}$ in (4.14) also form a basis of $S_J$. We label the $\Lambda_{Jk}$ by $k$ and let $\hat A$ be the corresponding stiffness matrix.

Consider the bases $\{\psi_{jk}\}$ in (4.9) and $\{\Phi_{Jk}\}$ in (4.8) of $V_J(\Omega)$. Let $Q$ be the change-of-basis matrix; that is,
$$(\psi_1\ \psi_2\ \cdots\ \psi_N)=(\Phi_1\ \Phi_2\ \cdots\ \Phi_N)\,Q.$$
We shall show that $Q$ has $O(N)$ nonzero entries. Since the construction from $\psi_{jk}$ to $\Psi_{jk}$ is linear, with little modification we can change from $\{\Psi_{jk}\}$ to $\{\Lambda_{Jk}\}$ in $O(N)$ operations.
To simplify the presentation and to explain the main idea more clearly, we further ignore the coefficients $\sqrt{2^j}$. Then the problem reduces to finding the matrix $Q$ such that
$$\begin{pmatrix}\phi(x+R-1)\\ \vdots\\ \psi(2^{J-1}x-2^{J-1}R+p)\end{pmatrix} =Q^t\begin{pmatrix}\phi(2^Jx+R-1)\\ \vdots\\ \phi(2^Jx-2^JR+1)\end{pmatrix}.$$
Proposition 4.6. For any $\mathbf x\in\mathbb R^N$, $O(N)$ operations are sufficient for computing the multiplication $Q\mathbf x$.

Proof. By (2.5) and (2.7), it is easy to see that
$$\begin{pmatrix}\phi(x+R-1)\\ \vdots\\ \phi(x-R+1)\\ \psi(x+R-p)\\ \vdots\\ \psi(x-R+p)\end{pmatrix} =\tilde Q_1^t\begin{pmatrix}\phi(2x+R-1)\\ \vdots\\ \phi(2x-2R+1)\end{pmatrix},$$
where $\tilde Q_1$ is a matrix of order $\dim V_1(\Omega)$ and each column of $\tilde Q_1$ consists of the coefficients $\pm c_k$. Hence there are at most $2p$ nonzero elements in each column of $\tilde Q_1$. Inductively,
$$Q=Q_J\cdots Q_2Q_1.$$
Here, for any $1\le j\le J$,
$$Q_j=\begin{pmatrix}\tilde Q_j&0\\0&I_j\end{pmatrix},$$
where $\tilde Q_j$ is a sparse matrix of order $\dim V_j(\Omega)$ and $I_j$ is the identity matrix of order $\dim V_J(\Omega)-\dim V_j(\Omega)$. Since $\tilde Q_j$ has $O(2^j)$ nonzero entries, a multiplication $Q\mathbf x$ needs
$$O(1)+O(2)+\cdots+O(2^J)=O(2\cdot2^J)=O(N)$$
operations.
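The multiplication by $Q$ is the synthesis half of the usual pyramid algorithm. A one-level sketch, written with periodic wrapping instead of the interval bookkeeping of the sets $D_j$ (our simplification; the $O(N)$ count is unaffected):

```python
import numpy as np

def synthesis_step(a, b, c):
    """Combine level-j scaling coefficients a and wavelet coefficients b
    (equal length) into level-(j+1) scaling coefficients, using the
    two-scale relations (2.5) and (2.7); c = (c_0, ..., c_R), sum(c) = 2."""
    n, R = len(a), len(c) - 1
    h = np.asarray(c) / np.sqrt(2.0)                       # scaling filter
    g = np.array([(-1) ** m * c[(1 - m) % (R + 1)]         # wavelet filter,
                  for m in range(R + 1)]) / np.sqrt(2.0)   # periodized
    out = np.zeros(2 * n)
    for k in range(n):
        for m in range(R + 1):
            out[(2 * k + m) % (2 * n)] += h[m] * a[k] + g[m] * b[k]
    return out

# One such step per level gives O(1) + O(2) + ... + O(2^J) = O(N) operations.
```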
Because of the term $x\,\bar\Phi_{Jk}$, the $\Lambda_{Jk}$ are fully supported in $[0,R]$, and $\hat A$ is still full. This can be remedied by a further change of basis. Consider the functions
$$(4.19)\qquad \Theta_{Jk}=\Lambda_{J,k-1}-\Lambda_{Jk},\qquad 2-R\le k\le 2^JR-1.$$
It is easy to show that $\operatorname{span}\{\Theta_{Jk}\}\subset S_J$ and that they are linearly independent; counting the dimension, they form another basis of $S_J$. Note that, for $0\le k\le 2^JR-R$, the mean values $\bar\Phi_{Jk}$ are the same constant, and
$$\operatorname{supp}\Theta_{Jk}=\operatorname{supp}\Theta_{Jk}'=\operatorname{supp}\Phi_{J,k-1}\cup\operatorname{supp}\Phi_{Jk}.$$
So all but $2R-2$ of the $\Theta_{Jk}$ and $\Theta_{Jk}'$ are locally supported in $[0,R]$. Let $\tilde A$ be the stiffness matrix corresponding to this basis; then $\tilde A$ has $O(N)$ nonzero entries.

Therefore, there is a linear transformation $T$ such that
$$A=T^t\tilde A\,T.$$
For any $\mathbf x\in\mathbb R^N$, $O(N)$ operations are sufficient to compute the multiplication $T\mathbf x$. Since $\tilde A$ is sparse, we can reduce the number of operations for computing $A\mathbf x$ to $O(N)$.

If (4.1) is imposed with the mixed or the Neumann boundary condition and $S_J$ is constructed from $\{\Lambda_{Jk}\}$, then $A$ is already sparse when $r=0$. Otherwise $A$ is full, and we can change the basis to the $\Theta_{Jk}$ in the same manner as in (4.19).
4.5. Relations with hierarchical bases in finite elements. As we have mentioned in the introduction, we now observe the connection between the wavelet basis (4.13) in the case $p=1$ and the hierarchical basis [27] of one-dimensional piecewise linear finite element spaces. Let $S_J$ be the subspace of $H^1_0(\Omega)$ defined in Theorem 3.1, with basis $\Psi_{jk}$ as in (4.13). Since $p=1$, now $R=2p-1=1$ and $\Omega=(0,1)$, so $\tilde D_{-1}=\emptyset$. For all $k\in D_j$, $j\ge0$, we find $\operatorname{supp}\psi_{jk}\subset[0,1]$, so $\bar\psi_{jk}=0$. It is easy to see that $\Psi_{jk}$ is a hat function with support $[k/2^j,(k+1)/2^j]$ and height $1/(2\sqrt{2^j})$. That is, the basis functions $\{\Psi_{jk}\}$ of $S_J$ are the piecewise linear hierarchical basis functions with the uniform mesh size $h=2^{-J}$.

On the other hand, consider the $\Theta_{Jk}$ in (4.19). In this case $\bar\Phi_{J,k-1}-\bar\Phi_{Jk}=0$. Observe that the $\Theta_{Jk}$, for $1\le k\le 2^J-1$, are the scaled piecewise linear nodal basis functions with the uniform mesh size $h=2^{-J}$.
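The orthonormality that trivializes the stiffness matrix is easy to verify in the Haar case. A quick sketch (ours) checks that the Gram matrix of the $\psi_{jk}$ in $L^2(0,1)$, which equals the stiffness matrix of the $\Psi_{jk}$ for $q=1$, $r=0$, is the identity:

```python
import numpy as np

M, J = 2 ** 12, 4
x = (np.arange(M) + 0.5) / M                 # midpoint grid on (0, 1)

def haar(j, k, t):                           # psi_jk = 2^{j/2} psi(2^j t - k)
    s = 2.0 ** j * t - k
    return 2.0 ** (j / 2.0) * (((0 <= s) & (s < 0.5)).astype(float)
                               - ((0.5 <= s) & (s < 1.0)).astype(float))

basis = [haar(j, k, x) for j in range(J) for k in range(2 ** j)]
G = np.array([[np.mean(u * v) for v in basis] for u in basis])
print(np.allclose(G, np.eye(len(basis))))    # True: the Gram matrix is I
```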
5. Numerical examples. In this section we verify our claims on some test problems of the form (4.1), with Dirichlet or Neumann boundary conditions. We have seen in §4.5 that the finite elements with $p=1$ are exactly the hierarchical piecewise linear finite elements, so we now test the methods with wavelets of order 2; that is, $p=2$ and $R=3$. The programs are coded in FORTRAN. The tests were done on a SUN SPARCstation 1.
We implement the functions $\phi$ and $\psi$ by their values at the dyadic points of $[0,R]$; that is, the values of $\phi(x)$ and $\psi(x)$ at $x=k/2^D$, for $0\le k\le 2^DR$ (the functions vanish for $k$ outside this range). The data are stored in disk files. Then we temporarily generate $\phi$ and $\psi$ with resolution $D+1$, from which we compute the functions $\Phi$ and $\Psi$. The integrations are done by Simpson's quadrature rule. Similarly, we store the dyadic values of $\Phi$ and $\Psi$ on $[0,R]$ with the common denominator $2^D$. In practice, the choice of $D$ affects the accuracy of the numerical quadratures. The highest level $J$ which can yield accurate solutions is also related to the resolution $D$. To choose $D$ empirically, consider the problem
$$(5.1)\qquad -u''=1,\quad x\in(0,3);\qquad u(0)=u(3)=0.$$
The solution $u=-\frac12x^2+\frac32x$ lies in the subspace $S_0$ if the wavelets are of order $p\ge2$; hence the errors are due entirely to the computations. The error $\|u-u_J\|_{0,\Omega}$ is approximately $10^{-8}$ for $D=8$, and $10^{-12}$ for $D=12$. In most of the computations shown below, we set $D=8$.
The function values of $\phi$, etc., are used repeatedly during the computation; hence they are read in by the program to save running time. Note that $\Phi(x)=1$ for $x\ge R$; this is the only function involved which does not have compact support, and the programs must treat this exception carefully.
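The dyadic values themselves can be generated from the refinement equation. The following sketch (Python rather than the paper's FORTRAN; valid for $p\ge2$; all names are ours) starts from the values of $\phi$ at the integers and doubles the resolution $D$ times:

```python
import numpy as np

def dyadic_phi(c, D):
    """Values phi(k / 2^D), 0 <= k <= 2^D R, from phi(x) = sum_m c_m phi(2x - m)."""
    R = len(c) - 1
    # phi at the interior integers: eigenvector of T = (c_{2i-j}) for the
    # eigenvalue 1, normalized by the partition of unity (2.12).
    T = np.array([[c[2 * i - j] if 0 <= 2 * i - j <= R else 0.0
                   for j in range(1, R)] for i in range(1, R)])
    w, V = np.linalg.eig(T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    phi = np.zeros(R + 1)
    phi[1:R] = v / v.sum()
    for d in range(1, D + 1):                # double the resolution
        half = 2 ** (d - 1)
        new = np.zeros(R * 2 ** d + 1)
        for k in range(len(new)):            # new grid point x = k / 2^d
            for m, cm in enumerate(c):
                j = k - m * half             # index of 2x - m on the old grid
                if 0 <= j <= R * half:
                    new[k] += cm * phi[j]
        phi = new
    return phi
```

$\psi$ is obtained in the same way from $\psi(x)=\sum_k(-1)^kc_{1-k}\phi(2x-k)$, and $\Phi$, $\Psi$ then follow by quadrature as in (3.6).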
We now test some Dirichlet boundary value problems. Let the finite element spaces $S_J$ be spanned by the basis $\Psi_{jk}$ chosen from (4.13). As a first example, we modify (5.1) a little in order to observe the convergence order:
$$(5.2)\qquad -u''=3x-3,\quad x\in(0,3);\qquad u(0)=u(3)=0.$$
The computational results are summarized in Table 1; the nearly constant entries in the first two columns exhibit the rates $h^3$ and $h^2$ predicted by (4.6) with $s=p=2$.

(5.2)     $h^{-3}\|u-u_J\|_{0,\Omega}$   $h^{-2}|u-u_J|_{1,\Omega}$   CG
J = 0            0.09153                      0.4750                  3
J = 1            0.08964                      0.5307                  5
J = 2            0.09051                      0.5598                  7
J = 3            0.09114                      0.5766                  8
J = 4            0.08994                      0.5811                 10

Table 1.

In Table 1 and in all of the following tables, $h=2^{-J}$, and the columns labelled "CG" and "PCG" list, respectively, the numbers of iterations needed by the conjugate gradient and the preconditioned conjugate gradient methods to obtain the displayed error. For problem (5.2), the preconditioner $SS^t$ is exactly $A^{-1}$; thus we get the exact solution in one iteration.
In the second example, we make up a problem with transcendental coefficients:
$$(5.3)\qquad -(e^{-x}u')'=\frac{\pi}{3}\,e^{-x}\Big(\cos\frac{\pi x}{3}+\frac{\pi}{3}\sin\frac{\pi x}{3}\Big),\quad x\in(0,3);\qquad u(0)=u(3)=0,$$
so that the exact solution is $u=\sin(\pi x/3)$. The computational results are summarized in Table 2. We also compare the numbers of iterations of the conjugate gradient method with and without the preconditioner.

(5.3)     $h^{-3}\|u-u_J\|_{0,\Omega}$   $h^{-2}|u-u_J|_{1,\Omega}$   CG   PCG
J = 0            0.02546                      0.0980                  3     3
J = 1            0.02622                      0.1285                  6     6
J = 2            0.02502                      0.1436                 12     9
J = 3            0.02509                      0.1528                 14    10
J = 4            0.02457                      0.1651                 16    10

Table 2.
In order to observe the trend of convergence in a picture, we tried a problem with a higher-frequency solution, say $u(x)=\sin(\lambda x)$, where $\lambda=4\pi/3$. It is not necessary to have a complicated $q(x)$ for this purpose, so we simply consider the problem $-u''=\lambda^2\sin(\lambda x)$. The Galerkin projections $u_J$ with $0\le J\le2$ are plotted in Figure 6. Visually there is no difference between $u$ and $u_3$ at this scale.

Figure 6. $u_0$; - - $u_1$; - - - $u_2$; $u$.
Finally, let us try a Neumann boundary value problem:
$$(5.4)\qquad -u''=x-\frac32,\quad x\in(0,3);\qquad u'(0)=u'(3)=0.$$
Now let $S_J$ be constructed from the basis functions $\Psi_{jk}$ defined in (4.10). The convergence rates are again as predicted by Theorem 4.1, as seen in Table 3.

(5.4)     $h^{-3}\|u-u_J\|_{0,\Omega}$   $h^{-2}|u-u_J|_{1,\Omega}$   CG
J = 0            0.03052                      0.1582                  3
J = 1            0.02989                      0.1768                  5
J = 2            0.02961                      0.1857                  6
J = 3            0.02977                      0.1911                  7
J = 4            0.02930                      0.1916                  9

Table 3.
REFERENCES
[1] O. Axelsson and G. Lindskog, On the rate of convergence of the preconditioned conjugate gradient method, Numer. Math., 48 (1986), pp. 499–523.
[2] A. P. Calderón, Intermediate spaces and interpolation, the complex method, Studia Math., 24 (1964), pp. 113–190.
[3] P. G. Ciarlet, "The Finite Element Method for Elliptic Problems," North-Holland, Amsterdam, 1978.
[4] R. Coifman and G. Weiss, "Analyse harmonique non commutative sur certains espaces homogènes," Springer-Verlag, Berlin, 1971.
[5] J. M. Combes, A. Grossmann and Ph. Tchamitchian, eds., "Wavelets, Time-Frequency Methods and Phase Space," 2nd ed., Springer-Verlag, Berlin, 1990.
[6] E. Cortina and S. M. Gomes, A wavelet based numerical method applied to free boundary problems, IPE Technical Report, São José dos Campos, Brasil, 1989.
[7] I. Daubechies, Orthonormal bases of compactly supported wavelets, Comm. Pure Appl. Math., 41 (1988), pp. 909–996.
[8] I. Daubechies and J. Lagarias, Two-scale difference equations. I. Global regularity of solutions, and II. Infinite matrix products, local regularity and fractals, AT&T Bell Laboratories, preprint.
[9] R. J. Duffin and A. C. Schaeffer, A class of nonharmonic Fourier series, Trans. Amer. Math. Soc., 72 (1952), pp. 341–366.
[10] R. Glowinski, W. M. Lawton, M. Ravachol and E. Tenenbaum, Wavelet solution of linear and nonlinear elliptic, parabolic and hyperbolic problems in one space dimension, preprint, AWARE Inc., Massachusetts, 1989.
[11] G. H. Golub and C. F. Van Loan, "Matrix Computations," 2nd ed., Johns Hopkins University Press, Baltimore, 1988.
[12] P. Goupillaud, A. Grossmann and J. Morlet, Cycle-octave and related transforms in seismic signal analysis, Geoexploration, 23 (1984/85), pp. 85–102.
[13] A. Grossmann, M. Holschneider, R. Kronland-Martinet and J. Morlet, Detection of abrupt changes in sound signals with the help of wavelet transforms, in "Inverse Problems: An Interdisciplinary Study," Academic Press, London-Orlando, 1987, pp. 289–306.
[14] A. Grossmann and J. Morlet, Decomposition of Hardy functions into square integrable wavelets of constant shape, SIAM J. Math. Anal., 15 (1984), pp. 723–736.
[15] A. Haar, Zur Theorie der orthogonalen Funktionensysteme, Math. Ann., 69 (1910), pp. 331–371.
[16] C. Heil, Wavelets and frames, in "Signal Processing, Part 1," 2nd ed., Springer-Verlag, New York, 1990, pp. 147–160.
[17] R. Kronland-Martinet, J. Morlet and A. Grossmann, Analysis of sound patterns through wavelet transforms, to appear.
[18] A. Latto and E. Tenenbaum, Les ondelettes à support compact et la solution numérique de l'équation de Burgers, preprint, AWARE Inc., Massachusetts, 1990.
[19] J. S. Liénard, "Speech Analysis and Reconstruction Using Short-Time, Elementary Waveforms," LIMSI-CNRS, Orsay, France.
[20] S. Mallat, Multiresolution approximations and wavelet orthonormal bases of $L^2(\mathbb R)$, Trans. Amer. Math. Soc., 315 (1989), pp. 69–87.
[21] S. Mallat, Multifrequency channel decompositions of images and wavelet models, IEEE Trans. Acoust. Speech Signal Process., 37 (1989), pp. 2091–2110.
[22] Y. Meyer, "Principe d'incertitude, bases hilbertiennes et algèbres d'opérateurs," Séminaire Bourbaki, no. 662, 1985.
[23] Y. Meyer, "Ondelettes et opérateurs, I: Ondelettes," Hermann, Paris, 1990.
[24] X. Rodet, Time-domain formant-wave-function synthesis, Computer Music J., 8 (1985).
[25] G. Strang, Wavelets and dilation equations: A brief introduction, SIAM Review, 31 (1989), pp. 614–627.
[26] J. Strömberg, A modified Franklin system and higher-order spline systems on $\mathbb R^n$ as unconditional bases for Hardy spaces, in "Conference on Harmonic Analysis in Honor of Antoni Zygmund," Wadsworth, Belmont, 1983, pp. 475–493.
[27] H. Yserentant, On the multi-level splitting of finite element spaces, Numer. Math., 49 (1986), pp. 379–412.
[28] O. C. Zienkiewicz, D. W. Kelly, J. Gago and I. Babuška, Hierarchical finite element approaches, error estimates and adaptive refinement, in "The Mathematics of Finite Elements and Applications IV," Academic Press, London, 1982, pp. 313–346.