
6 Interpolation of Functions

6(a) Polynomial Interpolation


Read Ch. 3 (Section 3.5 optional)
Applications: approximation theory, numerical integration & differentiation, integration of ODE's & PDE's.
Interpolation problem: Given distinct numbers x_0, x_1, ..., x_n (nodes) and the corresponding function values f(x_0) = y_0, f(x_1) = y_1, ..., f(x_n) = y_n, find a function F in a given class $\mathcal{F}$ such that

$$F(x_i) = y_i, \qquad i = 0, 1, \dots, n.$$

We consider mostly the case of polynomial interpolation, where $\mathcal{P}_m$ is the set of polynomials over the real field with degree at most m:

$$P(x) = a_0 + a_1 x + \cdots + a_m x^m, \qquad a_i \in \mathbb{R}$$

If m = n, then we showed in Ch. 2, as a consequence of Horner's method, that there may be at most one solution. Directly, if $P_1, P_2 \in \mathcal{P}_n$ are solutions, then $P_1 - P_2 \in \mathcal{P}_n$ and

$$(P_1 - P_2)(x_i) = 0, \qquad i = 0, \dots, n.$$

By the fundamental theorem of algebra, $P_1 = P_2$. Hence the solution, if it exists, is unique.


Uniqueness + linearity =⇒ existence.

$$a_0 + a_1 x_i + \cdots + a_n x_i^n = y_i, \qquad i = 0, 1, \dots, n$$

or, in matrix form,

$$\begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ 1 & x_2 & x_2^2 & \cdots & x_2^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}$$

or

$$V a = y$$

Since solutions are unique whenever they exist, the square matrix V is injective, hence nonsingular. Therefore a solution also exists for every y ∈ ℝ^{n+1}.
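For small n the system Va = y can be solved directly; the following sketch (function names our own, and not a recommended method in practice, since Vandermonde matrices become severely ill-conditioned as n grows) just assembles V and applies Gaussian elimination with partial pivoting:

```python
def vandermonde_coeffs(xs, ys):
    """Solve V a = y for the monomial coefficients a_0..a_n
    by Gaussian elimination with partial pivoting."""
    n = len(xs)
    # Augmented matrix [V | y], with V[i][j] = xs[i]**j
    A = [[x**j for j in range(n)] + [y] for x, y in zip(xs, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = A[r][n] - sum(A[r][c] * a[c] for c in range(r + 1, n))
        a[r] = s / A[r][r]
    return a

# Interpolate f(x) = x^2 at nodes 0, 1, 2: expect a = [0, 0, 1]
coeffs = vandermonde_coeffs([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
```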

Exercise. The matrix V_n(x_0, x_1, ..., x_n) is called the Vandermonde matrix. Show that

$$\det V_n(x_0, x_1, \dots, x_n) = \prod_{0 \le j < i \le n} (x_i - x_j)$$

This gives a direct proof that V_n(x) is nonsingular if x_i ≠ x_j for i ≠ j. [Hint: Show that det V_n(x) is a polynomial of degree n in x_n, that its roots are x_0, ..., x_{n−1}, and that the coefficient of x_n^n is det V_{n−1}(x_0, x_1, ..., x_{n−1}).]
Theorem (Existence & Uniqueness). Given (n+1) distinct numbers x_0, x_1, ..., x_n, then for any y_0, y_1, ..., y_n there exists a unique P ∈ $\mathcal{P}_n$ such that P(x_i) = y_i, i = 0, ..., n.
Lagrange interpolation: Given x_0, x_1, ..., x_n, the Lagrange polynomials L_i(x), i = 0, 1, ..., n, are defined by

$$L_i(x) = \prod_{j \ne i} \frac{x - x_j}{x_i - x_j}$$

Thus L_i ∈ $\mathcal{P}_n$ and

$$L_i(x_i) = 1, \qquad L_i(x_j) = 0, \quad j \ne i.$$

Therefore

$$P(x) = \sum_{i=0}^{n} y_i L_i(x)$$

is an explicit representation of the interpolating polynomial.
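The explicit Lagrange representation translates directly into code; a minimal sketch (function name our own), with the O(n²) work per evaluation point visible in the nested loops:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate P(x) = sum_i y_i L_i(x) with
    L_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                L *= (x - xj) / (xi - xj)
        total += yi * L
    return total

# The parabola through (0,0), (1,1), (2,4) is x^2, so P(1.5) = 2.25
val = lagrange_interpolate([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```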


Error estimation:
Theorem. Let f ∈ C^{n+1}(I), where I = I[x_0, ..., x_n, x̄] is the smallest interval which contains x̄ and all the x_i. Then ∃ξ ∈ I such that

$$f(\bar{x}) - P(\bar{x}) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,(\bar{x} - x_0)\cdots(\bar{x} - x_n)$$

Proof: The result is trivially true if x̄ = x_i for some i; hence we may assume x̄ ≠ x_i for all i. Because x_0, x_1, ..., x_n, x̄ are all distinct, ∃ a constant K such that

$$F(x) \equiv f(x) - P(x) - K(x - x_0)\cdots(x - x_n)$$

vanishes at x = x̄: F(x̄) = 0. Consequently, F(x) has at least the (n+2) zeroes x_0, ..., x_n, x̄ in I. By the generalized Rolle's theorem, ∃ξ ∈ I such that

$$F^{(n+1)}(\xi) = 0.$$

Since P^{(n+1)}(x) = 0,

$$0 = F^{(n+1)}(\xi) = f^{(n+1)}(\xi) - (n+1)!\,K$$

$\Rightarrow K = \frac{f^{(n+1)}(\xi)}{(n+1)!}$. QED
Exercise: Check that if x_i = x_0 + ih, h > 0 (uniform nodes), then

$$|(x - x_0)\cdots(x - x_n)| \le n!\,h^{n+1}$$

for x ∈ [x_0, x_n], and thus

$$\max_{x \in [x_0, x_n]} |f(x) - P(x)| \le \frac{h^{n+1}}{n+1} \max_{[x_0, x_n]} |f^{(n+1)}(x)|$$

However,

• Interpolants P(x) can be very poor approximations outside of the interval [x_0, x_n].

• Even inside a fixed interval [a, b] = [x_0, x_n] with h = (b − a)/n, the successive approximations P_n(x) with uniform nodes need not converge! The Runge example is f(x) = 1/(1 + x²), −5 < x < 5. It can be shown that for any 3.64 < |x| < 5,

$$\lim_{n \to \infty} |f(x) - P_n(x)| = +\infty.$$

See Isaacson & Keller (1966), pp. 275-279.
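The Runge example is easy to reproduce numerically; this sketch (names our own) interpolates f(x) = 1/(1 + x²) at uniform nodes on [−5, 5] and measures the maximum error on a fine sampling grid — the error grows with n instead of shrinking:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                L *= (x - xj) / (xi - xj)
        total += yi * L
    return total

def runge_max_error(n):
    """Max |f - P_n| over [-5, 5] for f(x) = 1/(1+x^2), uniform nodes."""
    xs = [-5 + 10.0 * i / n for i in range(n + 1)]
    ys = [1.0 / (1.0 + x * x) for x in xs]
    grid = [-5 + 10.0 * k / 400 for k in range(401)]
    return max(abs(lagrange_eval(xs, ys, t) - 1.0 / (1.0 + t * t)) for t in grid)

errors = [runge_max_error(n) for n in (4, 8, 12, 16)]  # grows with n
```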


Although interpolants on uniform grids may fail to converge, it can be shown that for any f ∈ C¹[a, b], grid points {x_0, ..., x_n} may be chosen so that $\lim_{n \to \infty} |f(x) - P_n(x)| = 0$ uniformly on [a, b].
Neville’s Algorithm
For given support points (xi , yi ), i = 1, . . . , n, denote by

Pi0 i1 ...ik ∈ Pk

the unique interpolating polynomial for the subset (xi0 , yi0 ), . . . , (xik , yik ).
Theorem: Pi (x) = yi
and
(x − xi0 )Pi1 i2 ...ik (x) − (x − xik )Pi0 i1 ...ik−1 (x)
Pi0 i1 ...ik (x) =
xi k − xi 0

Proof: See proof of Theorem 3.5 in the text.


Leads to Neville’s algorithm to iteratively evaluate Pn (x) at any chosen
point x = x̄:

k=0 k=1 k=2 k=3

x0 y0 = P0 (x̄)
P01 (x̄)
x1 y1 = P1 (x̄) P012 (x̄)
P12 (x̄) P0123 (x̄)
x2 y2 = P2 (x̄) P123 (x̄)
P23 (x̄)
x3 y3 = P3 (x̄)

Abbreviate, for j ≥ k,

Qj,k = Pj−k,j−k+1,...,j

Tableau:
x0 y0 = Q00
Q11
x1 y1 = Q10 Q22
Q21 Q33
x2 y2 = Q20 Q32
Q31
x3 y3 = Q30
For efficient evaluation (see Algorithm 3.1 of the text):

$$Q_{i,0} = y_i$$

$$Q_{j,k} = \frac{(\bar{x} - x_{j-k})\,Q_{j,k-1} - (\bar{x} - x_j)\,Q_{j-1,k-1}}{x_j - x_{j-k}} = Q_{j,k-1} + \frac{Q_{j,k-1} - Q_{j-1,k-1}}{\dfrac{\bar{x} - x_{j-k}}{\bar{x} - x_j} - 1}, \qquad 1 \le k \le j \le n$$

The special case with x̄ = 0 will be very useful later for extrapolation methods.
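Neville's recursion is easy to implement with a single array that is overwritten column by column; a minimal sketch (names our own) evaluating P_n at a chosen point x̄:

```python
def neville(xs, ys, xbar):
    """Return P_n(xbar) via the recursion
    Q_{j,k} = [(xbar - x_{j-k}) Q_{j,k-1} - (xbar - x_j) Q_{j-1,k-1}]
              / (x_j - x_{j-k}),
    overwriting a single array in place (j runs downward)."""
    Q = list(ys)
    n = len(xs)
    for k in range(1, n):
        for j in range(n - 1, k - 1, -1):
            Q[j] = ((xbar - xs[j - k]) * Q[j] - (xbar - xs[j]) * Q[j - 1]) \
                   / (xs[j] - xs[j - k])
    return Q[n - 1]

# Cubic through samples of x^3 reproduces x^3 exactly
value = neville([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 8.0, 27.0], 1.5)
```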
Newton’s Interpolation & Divided Differences
Note that
P0,1,...,n (x) − P0,1,...,n−1 (x)
is a polynomial of degree at most n with n roots x0 , x1 , . . . , xn−1 . Thus,
P0,1,...,n (x) = P0,1,...,n−1 (x) + an (x − x0 ) · · · (x − xn−1 )
Iteratively,
P (x) ≡ P0,1,...,n (x)
= a0 + a1 (x − x0 ) + a2 (x − x0 )(x − x1 ) + · · ·
· · · + an (x − x0 )(x − x1 ) · · · (x − xn−1 )
Useful for evaluation by Horner’s method.
With support points (x0 , y0 ), . . . , (xn , yn ),
Pi0 i1 ...ik (x) = Pi0 i1 ...ik−1 (x) + f [xi0 , xi1 , . . . , xik ](x − xi0 ) · · · (x − xik−1 )
= f [xi0 ] + f [xi0 , xi1 ](x − xi0 ) + · · ·
· · · + f [xi0 , xi1 , . . . , xik ](x − xi0 ) · · · (x − xik−1 )

The divided differences are defined iteratively by

$$f[x_i] = f(x_i)$$

$$f[x_{i_0}, \dots, x_{i_k}] = \frac{f[x_{i_1}, \dots, x_{i_k}] - f[x_{i_0}, \dots, x_{i_{k-1}}]}{x_{i_k} - x_{i_0}}$$

Tableau: See Algorithm 3.2

x0 y0 = f [x0 ]
f [x0 , x1 ]
x1 y1 = f [x1 ] f [x0 , x1 , x2 ]
f [x1 , x2 ] f [x0 , x1 , x2 , x3 ]
x2 y2 = f [x2 ] f [x1 , x2 , x3 ]
f [x2 , x3 ]
x3 y3 = f [x3 ]
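The divided-difference tableau and the Newton form are often computed together; a compact sketch (names our own), storing only the top edge f[x_0], f[x_0,x_1], ..., and evaluating by Horner's method:

```python
def divided_differences(xs, ys):
    """Return [f[x0], f[x0,x1], ..., f[x0..xn]] (top edge of the tableau),
    computed in place from the data."""
    coef = list(ys)
    n = len(xs)
    for k in range(1, n):
        for j in range(n - 1, k - 1, -1):
            coef[j] = (coef[j] - coef[j - 1]) / (xs[j] - xs[j - k])
    return coef

def newton_eval(xs, coef, x):
    """Horner-style evaluation of the Newton form."""
    p = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        p = p * (x - xs[k]) + coef[k]
    return p

# f(x) = x^2: coefficients [0, 1, 1], and P(3) = 9
c = divided_differences([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
```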

Theorem (Newton’s Interpolation Formula). If f ∈ C n+1 (I) for I =


I[x0 , . . . , xn , x], then for some ξ ∈ I,
n
X
f (x) = f (x0 ) + f [x0 , x1 , . . . , xk ](x − x0 ) · · · (x − xk−1 )
k=1
(∗)
(n+1)
f (ξ)
+ (x − x0 ) · · · (x − xn )
(n + 1)!

Proof: Follows from the above & uniqueness of the interpolation polynomial in $\mathcal{P}_n$. In fact, the formula holds with $\frac{f^{(n+1)}(\xi)}{(n+1)!} = f[x_0, \dots, x_n, x]$, since if x is added as an (n+2)nd point, f(x) = P_{n+1}(x) = P_n(x) + f[x_0, ..., x_n, x](x − x_0)···(x − x_n). QED
Theorem (Properties of Divided Differences).

(i) f[x_0, ..., x_n] is symmetric in x_0, ..., x_n.

(ii) For f ∈ C^n(I[x_0, ..., x_n]), ∃ξ ∈ I[x_0, ..., x_n] such that

$$f[x_0, \dots, x_n] = \frac{f^{(n)}(\xi)}{n!}$$

(iii) For f ∈ C^n(I[x_0, ..., x_n]),

$$f[x_0, \dots, x_n] = \int_{\Sigma_n} dt_1 \cdots dt_n \; f^{(n)}(t_0 x_0 + \cdots + t_n x_n)$$

where the integration is over the simplex $\Sigma_n = \{(t_1, \dots, t_n) : t_i \ge 0,\ \sum_{i=1}^{n} t_i \le 1\}$ and $t_0 = 1 - \sum_{i=1}^{n} t_i$. (Hermite-Genocchi formula)
i=1
Proof: (i) P_{0,1,...,n} is uniquely determined by (x_0, y_0), ..., (x_n, y_n), and f[x_0, ..., x_n] is the coefficient of its highest-order term.

(ii) Use (*) and the similar formula for n − 1,

$$f(x) = f(x_0) + \sum_{k=1}^{n-1} f[x_0, \dots, x_k]\,(x - x_0)\cdots(x - x_{k-1}) + \frac{f^{(n)}(\xi')}{n!}\,(x - x_0)\cdots(x - x_{n-1}),$$

to infer that

$$f[x_0, \dots, x_n]\,(x - x_0)\cdots(x - x_{n-1}) + \frac{f^{(n+1)}(\xi)}{(n+1)!}\,(x - x_0)\cdots(x - x_n) = \frac{f^{(n)}(\xi')}{n!}\,(x - x_0)\cdots(x - x_{n-1})$$

Setting x = x_n gives the result $f[x_0, \dots, x_n] = \frac{f^{(n)}(\xi')}{n!}$.

(iii) Exercise: Use induction and integration by parts. QED
The H-G formula shows that f[x_0, ..., x_n] may be continuously extended to points where the arguments coincide. For example,

$$f[\underbrace{x_0, \dots, x_0}_{(n+1)\ \text{times}}] = \frac{f^{(n)}(x_0)}{n!}$$

Also, if f ∈ C^{n+2},

$$\frac{d}{dx} f[x_0, \dots, x_n, x] = \lim_{h \to 0} \frac{f[x_0, \dots, x_n, x+h] - f[x_0, \dots, x_n, x]}{h} = \lim_{h \to 0} f[x_0, \dots, x_n, x, x+h] = f[x_0, \dots, x_n, x, x]$$

These facts are of particular utility in Hermite interpolation (see below).

Finite difference formulas: For sequences y_i, i ∈ ℤ,

$$(\Delta y)_i = y_{i+1} - y_i \qquad \text{forward difference operator}$$
$$(\nabla y)_i = y_i - y_{i-1} \qquad \text{backward difference operator}$$

Exercise:

$$(\Delta^n y)_0 = \sum_{k=0}^{n} (-1)^k \binom{n}{k} y_{n-k}$$

and

$$(\nabla^n y)_0 = \sum_{k=0}^{n} (-1)^k \binom{n}{k} y_{-k}$$

where $\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$.


For a uniform grid, x_i = x_0 + ih, h > 0,

$$f[x_0, x_1, \dots, x_k] = \frac{1}{k!} \frac{\Delta^k f(x_0)}{h^k}$$

and (check it!)

$$f[x_{n-k}, \dots, x_{n-1}, x_n] = \frac{1}{k!} \frac{\nabla^k f(x_n)}{h^k}.$$

Also, with x = x_0 + sh, (x − x_0)···(x − x_{k−1}) = s(s−1)···(s−k+1) h^k, so that

$$P_n(x) = \sum_{k=0}^{n} \binom{s}{k} \Delta^k f(x_0) \qquad \text{(Newton forward difference formula)}$$

and with x = x_n − sh, (x − x_n)(x − x_{n−1})···(x − x_{n−k+1}) = (−1)^k s(s−1)···(s−k+1) h^k, so that

$$P_n(x) = \sum_{k=0}^{n} (-1)^k \binom{s}{k} \nabla^k f(x_n) \qquad \text{(Newton backward difference formula)}$$

For points close to x0 , forward formula is most accurate & for those close to
xn , backward is best. In the middle, zigzag. See F. B. Hildebrand, 2nd ed.
(1974), for a complete discussion.
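On a uniform grid the forward formula needs only the column of forward differences Δ^k f(x_0) and the generalized binomial coefficients C(s, k) = s(s−1)···(s−k+1)/k!; a minimal sketch (names our own):

```python
def newton_forward(ys, x0, h, x):
    """Newton forward difference formula:
    P_n(x) = sum_k C(s, k) * Δ^k f(x_0), with s = (x - x0)/h."""
    # Forward differences Δ^k f(x0): first entry of each difference row
    d = list(ys)
    deltas = [d[0]]
    for _ in range(1, len(ys)):
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
        deltas.append(d[0])
    s = (x - x0) / h
    total, binom = 0.0, 1.0          # binom = C(s, k), starting with C(s, 0) = 1
    for k, dk in enumerate(deltas):
        total += binom * dk
        binom *= (s - k) / (k + 1)   # C(s, k+1) from C(s, k)
    return total
```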
Hermite interpolation

Consider sample points

$$(x_i, y_i^{(k)}), \qquad k = 0, 1, \dots, n_i,$$

for i = 0, 1, ..., m with x_0 < x_1 < ··· < x_m. The Hermite interpolation problem is to find P ∈ $\mathcal{P}_n$, with

$$n + 1 = \sum_{i=0}^{m} (n_i + 1),$$

such that

$$P^{(k)}(x_i) = y_i^{(k)}, \qquad k = 0, 1, \dots, n_i, \quad i = 0, 1, \dots, m. \qquad (*)$$

Theorem. (i) (Existence & Uniqueness) For arbitrary sample points x_0 < x_1 < ··· < x_m and values y_i^{(k)}, k = 0, 1, ..., n_i, i = 0, 1, ..., m, there exists a unique P ∈ $\mathcal{P}_n$ which satisfies (*).

(ii) (Error estimate) If f ∈ C^{n+1}(I[x_0, x_1, ..., x_m, x̄]), then

$$f(\bar{x}) = P(\bar{x}) + \frac{f^{(n+1)}(\xi)}{(n+1)!}\,(\bar{x} - x_0)^{n_0+1} \cdots (\bar{x} - x_m)^{n_m+1}$$

for some ξ ∈ I[x_0, x_1, ..., x_m, x̄].

Proofs: Entirely analogous to those for Lagrange interpolation.
There is an explicit representation in terms of generalized Lagrange polynomials:

$$P(x) = \sum_{i=0}^{m} \sum_{k=0}^{n_i} y_i^{(k)} L_{ik}(x)$$

where the L_{ik} ∈ $\mathcal{P}_n$ satisfy

$$L_{ik}^{(s)}(x_j) = \begin{cases} 1 & \text{if } i = j \text{ and } k = s \\ 0 & \text{otherwise} \end{cases}$$
See Stoer & Bulirsch (1980), Section 2.1.5.
Also, generalizations of Neville’s algorithm & Newton’s formula exist:
E.g. the unique cubic polynomial which solves P (a) = f (a), P 0 (a) = f 0 (a)
and P (b) = f (b), P 0 (b) = f 0 (b) is obtained from
a f (a) = f [a]
f 0 (a) = f [a, a]
a f (a) = f [a] f [a, a, b]
f [a, b] f [a, a, b, b]
b f (b) = f [b] f [a, b, b]
f 0 (b) = f [b, b]
b f (b) = f [b]

9
as P (x) = f (a) = f 0 (a)(x−a)+f [a, a, b](x−a)2 +f [a, a, b, b](x−a)2 (x−b).
See Project #5 for examples. For general discussion, Stoer & Bulirsch,
Section 2.1.5.
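The two-point cubic case can be coded directly from the divided differences with doubled nodes; a sketch (names our own):

```python
def hermite_cubic(a, fa, dfa, b, fb, dfb, x):
    """Two-point cubic Hermite via divided differences on doubled nodes:
    P(x) = f(a) + f'(a)(x-a) + f[a,a,b](x-a)^2 + f[a,a,b,b](x-a)^2 (x-b)."""
    fab = (fb - fa) / (b - a)          # f[a, b]
    faab = (fab - dfa) / (b - a)       # f[a, a, b]
    fabb = (dfb - fab) / (b - a)       # f[a, b, b]
    faabb = (fabb - faab) / (b - a)    # f[a, a, b, b]
    return fa + dfa*(x - a) + faab*(x - a)**2 + faabb*(x - a)**2*(x - b)

# Matches f(x) = x^3 exactly: f(0)=0, f'(0)=0, f(1)=1, f'(1)=3
val = hermite_cubic(0.0, 0.0, 0.0, 1.0, 1.0, 3.0, 0.5)
```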
Other important interpolation problems:
rational functions P (x)/Q(x)
trigonometric polynomials c0 + c1 eix + · · · + cn−1 ei(n−1)x
6(b) Trigonometric Interpolation

Theorem. Given N = 2n + 1 distinct points x_0, x_1, ..., x_{N−1} ∈ 𝕋 and arbitrary numbers y_0, y_1, ..., y_{N−1}, real or complex, there is a unique trigonometric polynomial T ∈ $\mathcal{T}_n := \{\sum_{k=-n}^{n} c_k e^{ikx} : c_k \in \mathbb{C}\}$ such that

$$T(x_k) = y_k, \qquad k = 0, 1, \dots, N-1.$$

Proof: The Haar condition for the system e^{ikx}, |k| ≤ n, is that the following determinant not vanish:

$$0 \ne \begin{vmatrix} e^{-inx_0} & e^{-inx_1} & \cdots & e^{-inx_{N-1}} \\ e^{-i(n-1)x_0} & e^{-i(n-1)x_1} & \cdots & e^{-i(n-1)x_{N-1}} \\ \vdots & & & \vdots \\ e^{i(n-1)x_0} & e^{i(n-1)x_1} & \cdots & e^{i(n-1)x_{N-1}} \\ e^{inx_0} & e^{inx_1} & \cdots & e^{inx_{N-1}} \end{vmatrix} = e^{-in(x_0 + x_1 + \cdots + x_{N-1})} \begin{vmatrix} 1 & 1 & \cdots & 1 \\ e^{ix_0} & e^{ix_1} & \cdots & e^{ix_{N-1}} \\ \vdots & & & \vdots \\ e^{i(N-1)x_0} & e^{i(N-1)x_1} & \cdots & e^{i(N-1)x_{N-1}} \end{vmatrix}$$

The latter is a Vandermonde determinant in the complex numbers z_0 = e^{ix_0}, ..., z_{N−1} = e^{ix_{N−1}}. Thus, the condition becomes

$$0 \ne e^{-in(x_0 + x_1 + \cdots + x_{N-1})} \prod_{j < k} (e^{ix_k} - e^{ix_j}),$$

which is true as long as x_k ≠ x_j for k ≠ j. QED


It may be shown (see Homework!) that the interpolating trigonometric polynomial may be written as

$$T(x) = \sum_{j=0}^{N-1} y_j \, t_j(x)$$

where t_j(x) is the Fourier-Lagrange polynomial

$$t_j(x) = \prod_{k \ne j} \frac{\sin\!\left(\frac{x - x_k}{2}\right)}{\sin\!\left(\frac{x_j - x_k}{2}\right)}.$$

Furthermore, if interpolation at an even number of points N = 2n is required, then it may be shown that a unique solution T(x) exists for trigonometric polynomials of the form

$$T(x) = \frac{a_0}{2} + \sum_{k=1}^{n-1} [a_k \cos(2\pi kx) + b_k \sin(2\pi kx)] + d_n \cos(2\pi nx + \phi)$$

for any choice of the phase φ. See A. Zygmund, Trigonometric Series, Chapter X, Section 3. We usually take φ = 0, d_n = a_n.
For f ∈ C(T), the unique polynomial In (f ) ∈ Tn of the above forms
which satisfies
In (f ; xk ) = f (xk ), k = 0, 1, . . . , N − 1
is called the interpolating trigonometric polynomial. It is interesting to ask
for the behavior of In (f ) as n → ∞. However, little is known except for the
following special case:
Interpolation on the uniform grid: Nodal points are taken at

$$x_k = x_0 + \frac{k}{N}, \qquad k = 0, 1, \dots, N-1$$

with x_0 ∈ 𝕋 arbitrary. The following is known:

Theorem. Let I_n(f) be the interpolating trigonometric polynomial on a uniform grid for f ∈ C(𝕋). Then,

$$\varrho_n(f) \le \|f - I_n(f)\|_\infty \le C \cdot \log(n+2) \cdot \varrho_n(f)$$

for some constant C, with $\varrho_n(f) := \inf_{T_n \in \mathcal{T}_n} \|f - T_n\|_\infty$.


Proof: See A. Zygmund, Trigonometric Series, Ch. X, Section 5.
Contrast this behavior with that for ordinary polynomials (Runge phe-
nomenon). Trigonometric polynomials on uniform grids are much better
behaved (in fact, almost optimal).

There is a very elegant method of finding In (f ) for a uniform grid of N
points, based upon the following:
Discrete Fourier Series

For f = (f_0, f_1, ..., f_{N−1}) ∈ ℂ^N, define f̂ = F by

$$\hat{f}_k = \frac{1}{N} \sum_{\ell=0}^{N-1} f_\ell \, e^{-2\pi i k\ell/N} \equiv F_k, \qquad k \in \mathbb{Z}$$

This is called the discrete Fourier transform of f. Note that f̂_{k+N} = f̂_k for all k ∈ ℤ. Thus, it is possible to view f̂ as a function on ℤ_N, the integers modulo N. Likewise, f may be periodically extended to ℤ, f_{k+N} = f_k, and considered as a function on ℤ_N. The following is basic:

Theorem. The set of functions

$$e^{(N)}_\ell(k) = \frac{1}{\sqrt{N}}\, e^{2\pi i k\ell/N}, \qquad \ell = 0, 1, \dots, N-1$$

is an orthonormal basis for ℓ²(ℤ_N). Thus, any function f ∈ ℓ²(ℤ_N) may be expanded into a Fourier series

$$f_k = \sum_{\ell=0}^{N-1} \hat{f}_\ell \, e^{2\pi i k\ell/N}$$

with

$$\hat{f}_\ell = \frac{1}{N} \sum_{k=0}^{N-1} f_k \, e^{-2\pi i k\ell/N}.$$

There is a Parseval equality

$$\frac{1}{N} \sum_{k=0}^{N-1} |f_k|^2 = \sum_{\ell=0}^{N-1} |\hat{f}_\ell|^2.$$
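The transform pair and the Parseval equality can be checked directly for a small vector; a naive O(N²) sketch (names our own):

```python
import cmath

def dft(f):
    """DFT with the 1/N normalization used here:
    f_hat_k = (1/N) * sum_l f_l e^{-2*pi*i*k*l/N}."""
    N = len(f)
    return [sum(f[l] * cmath.exp(-2j * cmath.pi * k * l / N) for l in range(N)) / N
            for k in range(N)]

def idft(fh):
    """Inversion: f_k = sum_l f_hat_l e^{2*pi*i*k*l/N}."""
    N = len(fh)
    return [sum(fh[l] * cmath.exp(2j * cmath.pi * k * l / N) for l in range(N))
            for k in range(N)]

f = [1.0, 2.0, 0.0, -1.0]
fh = dft(f)
back = idft(fh)
lhs = sum(abs(z)**2 for z in f) / len(f)   # (1/N) * sum |f_k|^2
rhs = sum(abs(z)**2 for z in fh)           # sum |f_hat_l|^2
```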

Proof: For k = 0, 1, ..., N − 1, the complex numbers

$$\omega_k \equiv e^{2\pi i k/N}$$

are the Nth roots of unity. Hence, for k ≠ 0,

$$\sum_{\ell=0}^{N-1} \omega_k^\ell = \frac{\omega_k^N - 1}{\omega_k - 1} = 0,$$

while for k = 0,

$$\sum_{\ell=0}^{N-1} \omega_0^\ell = \sum_{\ell=0}^{N-1} 1 = N.$$

It follows that

$$\sum_{\ell=0}^{N-1} e^{2\pi i k\ell/N} = N\,\delta_{k,0}$$

Substituting k = n − m gives

$$\sum_{\ell=0}^{N-1} e^{2\pi i n\ell/N}\, \overline{e^{2\pi i m\ell/N}} = N\,\delta_{n,m},$$

which is the first statement. The rest of the theorem follows easily. QED
Then there is:
Theorem. If I_n(f) is the unique interpolating polynomial in $\mathcal{T}_n$ for the N points

$$x_k = \frac{k}{N}, \qquad k = 0, 1, \dots, N-1$$

(with N = 2n or 2n+1, according as N is even or odd), then I_n(f) is given by the discrete Fourier transform f̂ of the vector f = (f(x_0), f(x_1), ..., f(x_{N−1})), as

$$I_n(f; x) = \sum_{|\ell| \le n} \hat{f}_\ell \, e^{2\pi i \ell x}, \qquad N = 2n+1 \text{ odd}$$

and

$$I_n(f; x) = \sum_{|\ell| \le n-1} \hat{f}_\ell \, e^{2\pi i \ell x} + (\mathrm{Re}\,\hat{f}_n)\cos(2\pi n x), \qquad N = 2n \text{ even},$$

with the phase φ = 0 in the case N = 2n, even. (Here $\hat{f}_{-\ell} := \hat{f}_{N-\ell}$ by periodicity.)


Proof: Consider first N = 2n + 1, odd. Since I_n ∈ $\mathcal{T}_n$ as defined above, we need only verify the interpolation property. However, as defined,

$$I_n(x_k) = \sum_{\ell=-n}^{n} \hat{f}_\ell \, e^{2\pi i \ell x_k} = \sum_{\ell=-n}^{n} \hat{f}_\ell \, e^{2\pi i \ell k/N} = e^{-2\pi i nk/N} \sum_{\ell=0}^{2n} \hat{f}_{\ell-n} \, e^{2\pi i \ell k/N}$$

But

$$\hat{f}_{\ell-n} = \frac{1}{N} \sum_{p=0}^{N-1} f(x_p)\, e^{-2\pi i(\ell-n)p/N} = \frac{1}{N} \sum_{p=0}^{N-1} \left(e^{2\pi i np/N} f(x_p)\right) e^{-2\pi i \ell p/N}$$

is just the discrete Fourier transform ĝ_ℓ of g_p = e^{2πinp/N} f(x_p). Thus, by Fourier inversion,

$$I_n(x_k) = e^{-2\pi i nk/N} g_k = e^{-2\pi i nk/N} \cdot e^{2\pi i nk/N} f(x_k) = f(x_k),$$

as claimed.

For N = 2n, even, an identical argument shows that both

$$\sum_{\ell=-n}^{n-1} \hat{f}_\ell \, e^{2\pi i \ell x_k} = f(x_k) \qquad \text{and} \qquad \sum_{\ell=-(n-1)}^{n} \hat{f}_\ell \, e^{2\pi i \ell x_k} = f(x_k)$$

Thus, also,

$$I_n(x_k) = \frac{1}{2}\left[\sum_{\ell=-n}^{n-1} \hat{f}_\ell \, e^{2\pi i \ell x_k} + \sum_{\ell=-(n-1)}^{n} \hat{f}_\ell \, e^{2\pi i \ell x_k}\right] = f(x_k)$$

We have used Re[f̂_n e^{2πinx_k}] = Re[f̂_n] cos(2πnx_k), since e^{2πinx_k} = cos(2πnx_k) = (−1)^k. QED
Equivalently,
Theorem. The trigonometric polynomials

$$T(x) = \frac{a_0}{2} + \sum_{\ell=1}^{n} [a_\ell \cos(2\pi \ell x) + b_\ell \sin(2\pi \ell x)]$$

and

$$T(x) = \frac{a_0}{2} + \sum_{\ell=1}^{n-1} [a_\ell \cos(2\pi \ell x) + b_\ell \sin(2\pi \ell x)] + \frac{a_n}{2} \cos(2\pi n x)$$

for N = 2n+1 and N = 2n, respectively, satisfy T(x_k) = f_k, k = 0, 1, ..., N − 1, for x_k = k/N if and only if the coefficients of T(x) are given by

$$a_\ell = \frac{2}{N} \sum_{k=0}^{N-1} f_k \cos\!\left(\frac{2\pi k\ell}{N}\right), \qquad b_\ell = \frac{2}{N} \sum_{k=0}^{N-1} f_k \sin\!\left(\frac{2\pi k\ell}{N}\right).$$

Proof: Obvious!
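These coefficient formulas can be verified numerically for odd N; a sketch (names our own) that computes a_ℓ, b_ℓ and checks the interpolation property at the nodes x_k = k/N:

```python
import math

def trig_coeffs(f):
    """a_l = (2/N) sum_k f_k cos(2*pi*k*l/N), b_l likewise with sin,
    for l = 0, ..., N//2 (N assumed odd, N = 2n+1)."""
    N = len(f)
    a = [2.0 / N * sum(f[k] * math.cos(2 * math.pi * k * l / N) for k in range(N))
         for l in range(N // 2 + 1)]
    b = [2.0 / N * sum(f[k] * math.sin(2 * math.pi * k * l / N) for k in range(N))
         for l in range(N // 2 + 1)]
    return a, b

def T_eval(a, b, x):
    """T(x) = a_0/2 + sum_{l=1}^{n} [a_l cos(2*pi*l*x) + b_l sin(2*pi*l*x)]."""
    return a[0] / 2 + sum(a[l] * math.cos(2 * math.pi * l * x)
                          + b[l] * math.sin(2 * math.pi * l * x)
                          for l in range(1, len(a)))

N = 5
f = [math.sin(2 * math.pi * k / N) + 0.5 for k in range(N)]
a, b = trig_coeffs(f)                      # T(x) = 0.5 + sin(2*pi*x)
vals = [T_eval(a, b, k / N) for k in range(N)]
```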

Aliasing Error: Consider a function of the form

$$f(x) = A\, e^{2\pi i (p + sN)x}$$

with |p| ≤ n, s ∈ ℤ, s ≠ 0 and, for simplicity, N = 2n + 1. Since

$$f(x_k) = A\, e^{2\pi i (p x_k + sk)} = A\, e^{2\pi i p x_k} \quad \text{for } x_k = \frac{k}{N},$$

it follows that

$$I_n(f; x) = A\, e^{2\pi i p x}$$

Thus, all large wavenumbers p + sN, s ≠ 0, are "folded back" to the wavenumbers p, |p| ≤ n, in the interpolating polynomial. This is called aliasing error in I_n(f; x).
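Aliasing is visible already at the level of the samples: e^{2πi(p+sN)x} and e^{2πipx} agree at every node x_k = k/N, so their interpolants coincide. A short check (names our own):

```python
import cmath

N, p, s = 5, 1, 2   # wavenumber p + s*N = 11 aliases to p = 1
high = [cmath.exp(2j * cmath.pi * (p + s * N) * k / N) for k in range(N)]
low = [cmath.exp(2j * cmath.pi * p * k / N) for k in range(N)]
# Identical samples => identical interpolant I_n(f; x) = e^{2*pi*i*p*x},
# since e^{2*pi*i*s*k} = 1 for integer s, k.
```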


Fast Fourier Transform
Naively, the calculation of the discrete Fourier transform requires N² complex multiplications. However, with appropriate choices of N, e.g. N = 2^n, n ∈ ℕ, the number of operations can be reduced to ∼ N log N. Such methods have a long history, but first became popular in computing after the work of J. W. Cooley & J. W. Tukey, Math. Comput. 19 297 (1965). For a history, see E. O. Brigham, The Fast Fourier Transform (Englewood Cliffs, Prentice-Hall, 1974). There are two basic methods:
Cooley-Tukey method:
We follow the elegant derivation of Danielson & Lanczos (1942). To simplify the derivation, we define

$$F_k = \frac{1}{N} F'_k, \qquad k = 0, 1, \dots, N-1$$

where

$$F'_k = \sum_{\ell=0}^{N-1} f_\ell \, e^{-2\pi i k\ell/N} \qquad (*)$$

has the normalization factor 1/N omitted. Recall N = 2^n.

The basic observation is that F'_k for each k can be written as the sum of two discrete Fourier transforms, each of length N/2 = 2^{n−1}. One is formed from the even-numbered points of the original N, the other from the odd-numbered points:

$$F'_k = \sum_{\ell=0}^{N/2-1} e^{-2\pi i k(2\ell)/N} f_{2\ell} + \sum_{\ell=0}^{N/2-1} e^{-2\pi i k(2\ell+1)/N} f_{2\ell+1} = \left[\sum_{\ell=0}^{N/2-1} e^{-2\pi i k\ell/(N/2)} f_{2\ell}\right] + \omega_n^k \left[\sum_{\ell=0}^{N/2-1} e^{-2\pi i k\ell/(N/2)} f_{2\ell+1}\right] \equiv F'^{\,0}_k + \omega_n^k F'^{\,1}_k, \qquad \omega_n \equiv e^{-2\pi i/2^n}$$

Here k still runs over k = 0, 1, ..., N − 1. However, if we note that F'^0_k, F'^1_k are periodic under translations of k by N/2, and that

$$\omega_n^{k+N/2} = \omega_n^k \cdot e^{-\pi i} = -\omega_n^k,$$

then we obtain

$$F'_{k+N/2} = F'^{\,0}_k - \omega_n^k F'^{\,1}_k.$$

The two relations

$$R_n: \qquad \begin{cases} F'_k = F'^{\,0}_k + \omega_n^k F'^{\,1}_k \\[2pt] F'_{k+N/2} = F'^{\,0}_k - \omega_n^k F'^{\,1}_k \end{cases} \qquad k = 0, 1, \dots, \frac{N}{2} - 1$$

with the definitions of F'^0_k, F'^1_k above, replace the previous expression (*) for F'_k, k = 0, 1, ..., N − 1.

Notice that one multiplication is now required for the two terms, giving a savings by a factor of 2 in the number of operations. However, the main savings in the algorithm comes from nesting of multiplications. (Cf. Horner's method for polynomials.) A careful operation count is given below.
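The relations R_n applied recursively are exactly the radix-2 FFT; a compact sketch (names our own; plain recursion instead of the in-place, bit-reversed bookkeeping described below) of the unnormalized transform F'_k:

```python
import cmath

def fft(f):
    """Unnormalized DFT F'_k = sum_l f_l e^{-2*pi*i*k*l/N} via the split
    F'_k = F'0_k + w^k F'1_k, F'_{k+N/2} = F'0_k - w^k F'1_k; N = 2^n."""
    N = len(f)
    if N == 1:
        return list(f)
    even = fft(f[0::2])   # transform of the even-numbered points
    odd = fft(f[1::2])    # transform of the odd-numbered points
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out

# Check against the direct O(N^2) sum
f = [1.0, 2.0, 3.0, 4.0, 0.0, -1.0, 2.0, 5.0]
direct = [sum(f[l] * cmath.exp(-2j * cmath.pi * k * l / 8) for l in range(8))
          for k in range(8)]
fast = fft(f)
```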

The savings are increased by iterating. If we adopt the convention b_{−1} = 0, then the general recursion may be written, for each m = n, n − 1, ..., 1,

$$R_m: \qquad \begin{cases} F^{b_{-1}b_0\cdots b_{n-m-1}}_k = F^{b_{-1}b_0\cdots b_{n-m-1}0}_k + \omega_m^k F^{b_{-1}b_0\cdots b_{n-m-1}1}_k \\[2pt] F^{b_{-1}b_0\cdots b_{n-m-1}}_{k+2^{m-1}} = F^{b_{-1}b_0\cdots b_{n-m-1}0}_k - \omega_m^k F^{b_{-1}b_0\cdots b_{n-m-1}1}_k \end{cases}$$

with

$$F^{b_{-1}b_0\cdots b_{n-m}}_k = \sum_{\ell=0}^{2^{m-1}-1} e^{-2\pi i k\ell/2^{m-1}} f_{2^{n-m+1}\ell + (2^{n-m}b_{n-m} + \cdots + 2b_1 + b_0)}$$

for k = 0, 1, ..., 2^{m−1} − 1. The quantities b_0, b_1, ..., b_{n−1} are all bits, taking values 0 or 1, which indicate whether the sublattice chosen is even or odd, respectively. Exercise: Prove by induction!

At the final iteration, one obtains

$$F^{b_{-1}b_0\cdots b_{n-1}}_0 = f_{2^{n-1}b_{n-1} + \cdots + 2b_1 + b_0}$$

Note that 2^{n−1}b_{n−1} + ··· + 2b_1 + b_0 is nothing but the binary expansion of k = 0, 1, ..., 2^n − 1. Therefore, the discrete Fourier transform F_k = f̂_k of f_k can be obtained from the following algorithm:

Step 1: Initialize the arrays F^{b_{−1}b_0···b_{n−1}}_0 in bit-reversed order, from the data f.

Step 2: Use R_m for m = 1, ..., n to calculate the arrays F^{b_{−1}b_0···b_{n−m−1}}_k, k = 0, 1, ..., 2^m − 1, from the arrays F^{b_{−1}b_0···b_{n−m}}_k, k = 0, 1, ..., 2^{m−1} − 1.

Step 3: Calculate F_k from F'_k = F^{b_{−1}}_k via

$$F_k = \frac{1}{N} F'_k, \qquad k = 0, 1, \dots, 2^n - 1.$$
N
Operation Count: Note that the relations R_m at stage m of Step 2 require multiplication of the factors indexed by 2^{m−1} values of k and 2^{n−m} values of (b_0, b_1, ..., b_{n−m−1}, 1). This is

$$2^{m-1} \times 2^{n-m} = 2^{n-1} \text{ multiplications}$$

for each m = 1, 2, ..., n, or

$$2^{n-1} \cdot n = \frac{N}{2} \log_2 N \text{ multiplications.}$$

Including the N multiplications by the factor 1/N for each k = 0, 1, ..., N − 1 in Step 3 gives a total of

$$\frac{N}{2} \log_2 N + N \text{ multiplications}$$
Sande-Tukey method: Another FFT method exists with the same oper-
ation count based upon an opposite idea, i.e. combining terms in the sum
rather than separating into subsums. See the Homework! This is sometimes
called decimation-in-frequency whereas the Cooley-Tukey method is called
decimation-in-time.
Additional Remarks on FFT: Many additional practical aspects of the
FFT are nicely discussed in Numerical Recipes [NR]:

• FFT’s can be carried out not only for N = 2n but for any factorization
N = N1 · · · Np . Highly optimized codings exist for taking small-N
discrete Fourier transforms, e.g. for N = 2, 3, 4, 5, 7, 8, 11, 13, 16, etc.,
called Winograd FFT’s. See NR, Section 12.2.

• If the function f is real, then

$$F_k^* = F_{-k}.$$

This symmetry can be used to further economize the FFT. See NR,
Section 12.3.

• There are also Fast-Fourier-Sine and Fast-Fourier-Cosine Transforms.


See NR, Section 12.3.

• Special bookkeeping methods can be employed to reduce unnecessary


copying of arrays in multidimensional FFT. See NR, Sections 12.4-12.5.

• It is possible to compute convolutions

$$(f * g)(x) = \int_0^1 f(x - y)\, g(y)\, dy$$

with the FFT, using the convolution theorem: $\widehat{(f * g)}(n) = \hat{f}(n)\,\hat{g}(n)$. This can be done with algorithms which eliminate the bit-reordering steps required in the usual FFT. See NR, Section 13.1.

6(c) Rational Interpolation
Problems with polynomial interpolation:

• Functions with poles on the real interval of interpolation, such as

$$\cot x = \frac{\cos x}{\sin x} \quad \text{at } x = 0 \text{ in } [0, 2\pi],$$

or with poles in the complex plane very near the interval of interpolation, such as

$$\frac{1}{\varepsilon^2 + x^2} = \frac{1}{(x + i\varepsilon)(x - i\varepsilon)} \quad \text{at } x = \pm i\varepsilon \text{ near } [-1, 1],$$

are not accurately interpolated with polynomials.

• Polynomials usually oscillate, so that error bounds frequently exceed the average approximation error by a significant amount.

Better interpolants are found with rational functions

$$R_{m,n}(x) = \frac{P_m(x)}{Q_n(x)} = \frac{a_0 + a_1 x + \cdots + a_m x^m}{b_0 + b_1 x + \cdots + b_n x^n}$$

The pair [m, n] is called the degree-type, and N = m + n is called the index. The number of free constants is N + 1, since one of the N + 2 constants may be chosen arbitrarily. (A frequent convention is to take b_0 = 1, when that is possible.)
Interpolation problem: For N + 1 distinct points (x_i, y_i), i = 0, 1, ..., N, find R_{m,n} so that

(A)  R_{m,n}(x_i) = y_i, i = 0, 1, ..., N.

For any solution, it must hold that

(B)  P_m(x_i) − y_i Q_n(x_i) = 0, i.e.

$$a_0 + a_1 x_i + \cdots + a_m x_i^m - y_i (b_0 + b_1 x_i + \cdots + b_n x_i^n) = 0, \qquad i = 0, 1, \dots, N.$$

(B) is a necessary, but not sufficient, condition for (A).

Example: with m = n = 1 and support points

x_i | 0  1  2
y_i | 1  2  2

the system (B),

$$a_0 - 1 \cdot b_0 = 0$$
$$a_0 + a_1 - 2(b_0 + b_1) = 0$$
$$a_0 + 2a_1 - 2(b_0 + 2b_1) = 0,$$

has a solution, unique up to a common constant factor, given by

$$a_0 = 0, \quad b_0 = 0, \quad a_1 = 2, \quad b_1 = 1.$$

Thus R_{1,1}(x) = 2x/x, or equivalently, R̃_{1,1}(x) = 2. However, (x_0, y_0) = (0, 1) is missed. Thus (B) is satisfied but (A) is not.
Moral: A rational interpolant of specified degree-type [m, n] may not
exist!
The notion of equivalence of rational expressions used above is that

$$R^{(i)}(x) = \frac{P^{(i)}(x)}{Q^{(i)}(x)}, \qquad i = 1, 2 \quad (Q^{(i)}(x) \not\equiv 0)$$

are equivalent, written R^{(1)} ∼ R^{(2)}, if

$$P^{(1)}(x)\,Q^{(2)}(x) = P^{(2)}(x)\,Q^{(1)}(x).$$

A rational expression is called relatively prime if its numerator P(x) and denominator Q(x) are not both divisible by a common polynomial of positive degree. Given the rational expression R(x), we denote by R̃(x) the corresponding equivalent relatively prime rational expression, obtained by cancelling the common factors of P(x), Q(x). It is unique up to a common factor in the numerator & denominator.
We now have the following theorem:
Theorem. In any rational interpolation problem, the solution of the linear system (B) exists and is unique, in the following sense:

(i) The homogeneous linear system has nontrivial solutions, and for each such solution, R_{m,n}(x) = P_m(x)/Q_n(x) with Q_n ≢ 0, i.e. it defines a rational function.

(ii) Any two nontrivial solutions are equivalent: R^{(1)} ∼ R^{(2)}.

Proof: (i) Since the homogeneous linear system has n + m + 1 equations for n + m + 2 unknowns, there are solutions such that

(*)  (a_0, ..., a_m, b_0, ..., b_n) ≠ (0, ..., 0, 0, ..., 0).

For any such solution, also (b_0, ..., b_n) ≠ (0, ..., 0). Otherwise, from (B),

$$P_m(x_i) = y_i Q_n(x_i) = 0, \qquad i = 0, 1, \dots, n+m.$$

Since P_m has degree at most m but would have n + m + 1 zeroes, this would imply P_m(x) ≡ 0, contradicting (*). Thus Q_n(x) ≢ 0.

(ii) Given two solutions R^{(1)}, R^{(2)}, consider

$$S(x) := P^{(1)}(x)\,Q^{(2)}(x) - P^{(2)}(x)\,Q^{(1)}(x) \in \mathcal{P}_{m+n}$$

Since S(x_i) = (y_i Q^{(1)}(x_i)) Q^{(2)}(x_i) − (y_i Q^{(2)}(x_i)) Q^{(1)}(x_i) = 0 for i = 0, 1, ..., n + m, the polynomial S of degree at most m + n has m + n + 1 zeroes, so S(x) ≡ 0 and thus R^{(1)} ∼ R^{(2)}. QED
Important Remark: The converse to (ii) is false: not every rational expression R′ equivalent to a solution R of (B) is itself a solution. In the previous example, we saw that R̃ failed to be a solution of (B) [and hence of (A)].

Since (B) is necessary for (A), either the solution R of (B) solves (A), or else (A) is not solvable at all. In the latter case, there must be support points (x_i, y_i) which are not hit by R. These are called inaccessible. Thus, (A) is solvable iff there are no inaccessible points. The following theorem is basic:

Theorem: For any rational interpolation problem (A), a solution exists iff, for any solution R of (B), its relatively prime version R̃ is also a solution of (B).

Proof: Suppose R_{m,n}(x) = P_m(x)/Q_n(x) is a solution of (B). Then a point x_i, i = 0, 1, ..., m + n, is inaccessible if and only if Q_n(x_i) = 0: otherwise, dividing (B) by Q_n(x_i) would give R_{m,n}(x_i) = y_i. However, for any solution of (B), Q_n(x_i) = 0 iff both Q_n(x_i) = 0 and P_m(x_i) = 0, since P_m(x_i) = y_i Q_n(x_i). In that case, Q_n(x) & P_m(x) must have a common factor (x − x_i) and cannot be relatively prime. Thus, an inaccessible point x_i exists iff no solution R_{m,n}(x) of (B) is relatively prime. QED
A set of support points (x_p, y_p), p = 0, 1, ..., s, is said to be in special position if they are interpolated by a rational expression of degree type [k, ℓ] with k + ℓ < s.

Theorem. In a nonsolvable rational interpolation problem, the accessible support points are in special position.

Proof: Let R_{m,n} be the solution of (B) and x_{i_1}, ..., x_{i_a} the inaccessible points, a ≥ 1. Then P_m(x) & Q_n(x) have a common factor (x − x_{i_1})···(x − x_{i_a}) whose cancellation gives

$$P_k(x) = \frac{P_m(x)}{(x - x_{i_1}) \cdots (x - x_{i_a})}, \qquad k = m - a$$

$$Q_\ell(x) = \frac{Q_n(x)}{(x - x_{i_1}) \cdots (x - x_{i_a})}, \qquad \ell = n - a$$

so that

$$R_{k,\ell}(x) \equiv \frac{P_k(x)}{Q_\ell(x)}$$

solves the interpolation problem for the m + n + 1 − a accessible points. Since

$$k + \ell + 1 = m + n + 1 - 2a < m + n + 1 - a,$$

the accessible points are obviously in special position. QED


Thus, nonsolvable rational interpolation problems are highly nongeneric!

Algorithms to Solve Rational Interpolation Problems


(i) Inverse & Reciprocal Differences; Continued Fraction Representations
The algorithms discussed here solve for the rational interpolants R_{m,n} along the main diagonal of the [m, n] plane (degree types [0, 0], [1, 0], [1, 1], [2, 1], [2, 2], ...). The inverse differences are defined recursively by

$$\varphi(x_{i_1}, \dots, x_{i_{\ell-1}}, x_{i_\ell}, x_{i_{\ell+1}}) = \frac{x_{i_\ell} - x_{i_{\ell+1}}}{\varphi(x_{i_1}, \dots, x_{i_{\ell-1}}, x_{i_\ell}) - \varphi(x_{i_1}, \dots, x_{i_{\ell-1}}, x_{i_{\ell+1}})}$$

initiated with

$$\varphi(x_i, x_j) = \frac{x_i - x_j}{y_i - y_j} = \frac{x_i - x_j}{f(x_i) - f(x_j)}$$
Note: It is possible for ϕ = ∞, when denominators vanish. Also, ϕ(xi1 , . . . , xi` )
is generally not symmetric in its arguments xi1 , . . . , xi` .
Theorem. When the solution R_{n,n}(x) to a rational interpolation problem of degree type [n, n] exists, it is represented by the continued fraction

$$R_{n,n}(x) = f_0 + \cfrac{x - x_0}{\varphi(x_0, x_1) + \cfrac{x - x_1}{\varphi(x_0, x_1, x_2) + \cfrac{x - x_2}{\varphi(x_0, x_1, x_2, x_3) + \cdots + \cfrac{x - x_{2n-1}}{\varphi(x_0, x_1, \dots, x_{2n})}}}}$$

Proof: We use P_k, Q_k generically to denote elements of $\mathcal{P}_k$. Thus, if

$$R_{n,n}(x) = \frac{P_n(x)}{Q_n(x)}$$

exists, it follows that R_{n,n}(x_0) = f_0 and thus

$$R_{n,n}(x) = f_0 + R_{n,n}(x) - R_{n,n}(x_0) = f_0 + \frac{P_n(x)}{Q_n(x)} - \frac{P_n(x_0)}{Q_n(x_0)} = f_0 + (x - x_0)\frac{P_{n-1}(x)}{Q_n(x)} = f_0 + \frac{x - x_0}{Q_n(x)/P_{n-1}(x)}.$$

Since also R_{n,n}(x_1) = f_1, it follows that

$$\frac{Q_n(x_1)}{P_{n-1}(x_1)} = \frac{x_1 - x_0}{f_1 - f_0} = \varphi(x_0, x_1)$$

Hence, it follows in the same way that

$$\frac{Q_n(x)}{P_{n-1}(x)} = \varphi(x_0, x_1) + \frac{Q_n(x)}{P_{n-1}(x)} - \frac{Q_n(x_1)}{P_{n-1}(x_1)} = \varphi(x_0, x_1) + (x - x_1)\frac{Q_{n-1}(x)}{P_{n-1}(x)} = \varphi(x_0, x_1) + \frac{x - x_1}{P_{n-1}(x)/Q_{n-1}(x)}$$

and thus

$$\frac{P_{n-1}(x_2)}{Q_{n-1}(x_2)} = \frac{x_2 - x_1}{\varphi(x_0, x_2) - \varphi(x_0, x_1)} = \varphi(x_0, x_1, x_2).$$

Continuing in this manner, one obtains

$$R_{n,n}(x) = f_0 + \cfrac{x - x_0}{\varphi(x_0, x_1) + \cfrac{x - x_1}{\varphi(x_0, x_1, x_2) + \cdots + \cfrac{x - x_{2n-1}}{\varphi(x_0, x_1, \dots, x_{2n})}}}$$

QED

The same argument shows as well that

$$R_{n,n-1}(x) = f_0 + \cfrac{x - x_0}{\varphi(x_0, x_1) + \cfrac{x - x_1}{\varphi(x_0, x_1, x_2) + \cdots + \cfrac{x - x_{2n-2}}{\varphi(x_0, x_1, \dots, x_{2n-1})}}}$$

when the problem of degree type [n, n − 1] is solvable.


The inverse differences are not fully symmetric in their arguments, but so-called reciprocal differences ϱ(x_{i_0}, ..., x_{i_k}) can be defined iteratively as

$$\varrho(x_{i_0}, x_{i_1}, \dots, x_{i_{k-1}}, x_{i_k}) = \frac{x_{i_0} - x_{i_k}}{\varrho(x_{i_0}, \dots, x_{i_{k-1}}) - \varrho(x_{i_1}, \dots, x_{i_k})} + \varrho(x_{i_1}, \dots, x_{i_{k-1}}) \qquad (*)$$

initiated as

$$\varrho(x_i) = f_i.$$
This may be arranged in a tableau, each entry computed from its two left-hand neighbors and the entry two columns back:

x_0  f_0
              ϱ(x_0, x_1)
x_1  f_1                     ϱ(x_0, x_1, x_2)
              ϱ(x_1, x_2)                       ϱ(x_0, x_1, x_2, x_3)
x_2  f_2                     ϱ(x_1, x_2, x_3)      ⋱
              ϱ(x_2, x_3)        ⋮
x_3  f_3          ⋮
 ⋮    ⋮

Theorem. (i) The reciprocal difference ϱ(x_0, ..., x_k) is symmetric in its arguments; and (ii) for k ≥ 1,

$$\varphi(x_0, x_1, \dots, x_k) = \varrho(x_0, \dots, x_k) - \varrho(x_0, \dots, x_{k-2}).$$

Proof: See L. M. Milne-Thomson, The Calculus of Finite Differences (London, Macmillan, 1933).

Combining the two theorems gives Thiele's continued fraction representation

$$R_{n,n}(x) = f_0 + \cfrac{x - x_0}{\varrho(x_0, x_1) + \cfrac{x - x_1}{\varrho(x_0, x_1, x_2) - \varrho(x_0) + \cdots + \cfrac{x - x_{2n-1}}{\varrho(x_0, \dots, x_{2n}) - \varrho(x_0, \dots, x_{2n-2})}}}$$

and a similar representation for R_{n,n−1}(x).


Example: with data (x_i, f_i) = (0, 0), (1, −1), (2, −2/3), (3, 9), the reciprocal-difference tableau is

x_i   f_i
 0     0
            ϱ(x_0,x_1) = −1
 1    −1                        ϱ(x_0,x_1,x_2) = −1/2
            ϱ(x_1,x_2) = 3                        ϱ(x_0,x_1,x_2,x_3) = −1/2
 2   −2/3                       ϱ(x_1,x_2,x_3) = −19/14
            ϱ(x_2,x_3) = 3/29
 3     9

∴ f_0 = 0, φ(x_0, x_1) = ϱ(x_0, x_1) = −1, φ(x_0, x_1, x_2) = ϱ(x_0, x_1, x_2) − ϱ(x_0) = −1/2 − 0 = −1/2, and φ(x_0, x_1, x_2, x_3) = ϱ(x_0, x_1, x_2, x_3) − ϱ(x_0, x_1) = −1/2 − (−1) = 1/2, so that

$$R_{2,1}(x) = 0 + \cfrac{x}{-1 + \cfrac{x-1}{-\tfrac12 + \cfrac{x-2}{\tfrac12}}} = \frac{4x^2 - 9x}{7 - 2x}$$
For computational purposes it is generally better to leave the solution
Rn,n or Rn,n−1 in the form of a continued fraction, rather than converting to
a rational fraction. In this way, the number of operations (multiplications
and divisions) is significantly reduced.
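The inverse differences and the inside-out continued-fraction evaluation can be sketched as follows (names our own); on the example data above it reproduces the partial quotients and R_{2,1}:

```python
def thiele_quotients(xs, fs):
    """a[k] = phi(x_0, ..., x_k) (with a[0] = f_0), the partial quotients
    of the continued fraction, built from inverse differences."""
    n = len(xs)
    w = list(fs)                 # level l: w[j] = phi(x_0,..,x_{l-1}, x_j)
    a = [w[0]]
    for l in range(1, n):
        w = [None] * l + [(xs[l - 1] - xs[j]) / (w[l - 1] - w[j])
                          for j in range(l, n)]
        a.append(w[l])
    return a

def thiele_eval(xs, a, x):
    """Evaluate f_0 + (x-x_0)/(a_1 + (x-x_1)/(a_2 + ...)) inside out."""
    r = a[-1]
    for k in range(len(a) - 2, -1, -1):
        r = a[k] + (x - xs[k]) / r
    return r

xs = [0.0, 1.0, 2.0, 3.0]
fs = [0.0, -1.0, -2.0 / 3.0, 9.0]
a = thiele_quotients(xs, fs)     # -> [0, -1, -1/2, 1/2]
```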

Operation Count

Degree type Rational fraction Continued fraction


[3, 3] 5 3
[4, 3] 6 4
[4, 4] 7 4
[5, 4] 7 5
[5, 5] 7 5
[6, 5] 8 6
[6, 6] 9 6
(However, the number of divisions is greatly increased, so that rational fractions may be preferred if divisions are more costly!)
Note that the continued fraction formulation deserves emphasis because
the rational interpolants of type [n, n] and [n, n − 1] are generally the most
accurate approximations among all rational functions of the same index!
(ii) Algorithms of the Neville Type
We use

$$R^{(s)}_{m,n}(x) = \frac{P^{(s)}_m(x)}{Q^{(s)}_n(x)}$$

to denote the rational interpolant with

$$R^{(s)}_{m,n}(x_i) = f_i, \qquad i = s, s+1, \dots, s+m+n$$

Then the following theorem gives a basis for a Neville-type algorithm to generate the R^{(s)}_{m,n}:

Theorem: For m, n ≥ 1,

$$R^{(s)}_{m,n}(x) = R^{(s+1)}_{m-1,n}(x) + \cfrac{R^{(s+1)}_{m-1,n}(x) - R^{(s)}_{m-1,n}(x)}{\dfrac{x - x_s}{x - x_{s+m+n}}\left[1 - \dfrac{R^{(s+1)}_{m-1,n}(x) - R^{(s)}_{m-1,n}(x)}{R^{(s+1)}_{m-1,n}(x) - R^{(s+1)}_{m-1,n-1}(x)}\right] - 1}$$

and

$$R^{(s)}_{m,n}(x) = R^{(s+1)}_{m,n-1}(x) + \cfrac{R^{(s+1)}_{m,n-1}(x) - R^{(s)}_{m,n-1}(x)}{\dfrac{x - x_s}{x - x_{s+m+n}}\left[1 - \dfrac{R^{(s+1)}_{m,n-1}(x) - R^{(s)}_{m,n-1}(x)}{R^{(s+1)}_{m,n-1}(x) - R^{(s+1)}_{m-1,n-1}(x)}\right] - 1}$$

Proof: See Homework!

This can be used to calculate R_{m,n} along zig-zag paths in the [m, n] plane.

Preferred path: [0, 0] → [0, 1] → [1, 1] → [1, 2] → [2, 2] → ···

Along this path set

$$T_{ij}(x) \equiv R^{(s)}_{m,n}(x) \quad \text{for } i = s + m + n, \; j = m + n$$

The recursion becomes

$$T_{i,-1} \equiv 0, \qquad T_{i,0} = f_i$$

$$T_{i,k} = T_{i,k-1} + \cfrac{T_{i,k-1} - T_{i-1,k-1}}{\dfrac{x - x_{i-k}}{x - x_i}\left[1 - \dfrac{T_{i,k-1} - T_{i-1,k-1}}{T_{i,k-1} - T_{i-1,k-2}}\right] - 1}$$

This is identical with the recursion in Neville's algorithm for polynomials, except that there the bracketed term [···] equals 1.

In a tableau this is (first column T_{i,0} = f_i; each new entry is computed from its two left neighbors T_{i,k−1}, T_{i−1,k−1} together with T_{i−1,k−2}):

    f0 = T00
                T11
    f1 = T10            T22
                T21             T33
    f2 = T20            T32         · · ·
                T31
    f3 = T30
      · · ·

Along the preferred path, the entries in columns k = 0, 1, 2, 3, 4, . . . are rational interpolants of type [0, 0], [0, 1], [1, 1], [1, 2], [2, 2], . . .

This algorithm is especially useful if the values of the rational interpolant are required at a particular point x∗, rather than the rational function itself (i.e. its coefficients).
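At a fixed point x the whole tableau can be generated in a few lines. The sketch below is my own illustration (the function name and array layout are assumptions, and there are no guards against accidental zero denominators); along the preferred path, the final entry for n + 1 support points is the interpolant of type [⌊n/2⌋, ⌈n/2⌉].

```python
def rational_neville(xs, fs, x):
    """Evaluate the rational interpolant along the preferred zig-zag path
    via the T_{i,k} recursion; T[i][k+1] holds T_{i,k}, and the sentinel
    column T[i][0] = 0 plays the role of T_{i,-1}."""
    n = len(xs)
    T = [[0.0] * (n + 1) for _ in range(n)]
    for i in range(n):
        T[i][1] = fs[i]                      # T_{i,0} = f_i
    for k in range(1, n):
        for i in range(k, n):
            num = T[i][k] - T[i - 1][k]      # T_{i,k-1} - T_{i-1,k-1}
            ratio = (x - xs[i - k]) / (x - xs[i])
            den = ratio * (1 - num / (T[i][k] - T[i - 1][k - 1])) - 1
            T[i][k + 1] = T[i][k] + num / den
    return T[n - 1][n]

# data sampled from f(x) = (11 + 5x)/(22 - 7x + x^2), itself of type [1, 2];
# with four support points the recursion reproduces f (up to roundoff)
xs = [0.0, 1.0, 2.0, 3.0]
fs = [(11 + 5 * t) / (22 - 7 * t + t * t) for t in xs]
print(abs(rational_neville(xs, fs, 0.5) - 0.72) < 1e-12)  # True
```

Storing T_{i,k} in column k + 1 keeps the T_{i,−1} ≡ 0 convention out of the index arithmetic.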

An important modern application of the algorithm is in extrapolation methods for solving initial-value problems for ODEs (Bulirsch–Stoer methods). This is discussed in Numerical Recipes, Section 16.4, and in C. W. Gear, Numerical Initial-Value Problems in ODE's (Prentice-Hall, 1971), pp. 93-101. It is perhaps the best-known way to obtain high-accuracy solutions to ODEs with minimal computational effort!

6(d) Piecewise Polynomial Interpolation & Splines
Given a partition

    ∆ : a = x_0 < x_1 < · · · < x_m = b    (the x_i are called knots, breakpoints, or nodes)

of an interval [a, b], the Hermite function space H_∆^{(n)} is defined to be the space of functions ϕ on [a, b] such that ϕ ∈ H_∆^{(n)} iff

    (i) ϕ ∈ C^n[a, b]
    (ii) ϕ|[x_i, x_{i+1}] ∈ P_{2n+1}.

One can choose P_i = ϕ|[x_i, x_{i+1}] ∈ P_{2n+1} to be the Hermite interpolating polynomial with

    P_i^{(k)}(x_i) = f^{(k)}(x_i)  &  P_i^{(k)}(x_{i+1}) = f^{(k)}(x_{i+1})

for k = 0, 1, . . . , n. This is widely used with n = 1 and the cubic Hermite polynomials considered in Project #5.
It can be shown that if f ∈ C^{2n+2}[a, b], then

    |f^{(k)}(x) − P_i^{(k)}(x)| ≤ [|(x − x_i)(x − x_{i+1})|^{n−k+1} (x_{i+1} − x_i)^k / (k!(2n − 2k + 2)!)] · max_{t∈[x_i,x_{i+1}]} |f^{(2n+2)}(t)|

for x ∈ [x_i, x_{i+1}], and thus

    ‖f^{(k)} − ϕ^{(k)}‖_∞ ≤ ‖∆‖^{2n−k+2} / (2^{2n−2k+2} k!(2n − 2k + 2)!) · ‖f^{(2n+2)}‖_∞

for k = 0, 1, . . . , n + 1, where ‖∆‖ = max_{0≤i≤m−1} |x_{i+1} − x_i|. Unlike simple polynomial interpolation, the error goes to zero as ‖∆‖ → 0! See Ciarlet, Schultz & Varga, Num. Math. 9, 394 (1967).
Exercise: Show directly from the error estimate on Hermite interpolation that the k = 0 bound is true, i.e.

    ‖f − ϕ‖_∞ ≤ ‖∆‖^{2n+2} / (2^{2n+2}(2n + 2)!) · ‖f^{(2n+2)}‖_∞.

Note ‖f‖_∞ = max_{x∈[a,b]} |f(x)| for f ∈ C[a, b].
Thus, piecewise polynomial interpolation is convergent!
However, use of Hermite interpolants requires knowledge of derivatives such as f'(x_i), f'(x_{i+1}) to calculate P_i = ϕ|[x_i, x_{i+1}]. (Although this is exploited for Bézier curves in computer graphics: see Section 3.5.)
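For the common case n = 1, a piecewise cubic Hermite interpolant can be sketched as follows (a hypothetical helper, not the Project #5 code; it uses the standard cubic Hermite basis on each subinterval). Halving the mesh should shrink the maximum error by roughly 2⁴ = 16, per the k = 0 bound above.

```python
import math

def hermite_cubic(x0, x1, f0, f1, d0, d1, x):
    # cubic on [x0, x1] matching function values and first derivatives
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2     # value basis at x0
    h10 = t * (1 - t) ** 2               # slope basis at x0
    h01 = t * t * (3 - 2 * t)            # value basis at x1
    h11 = t * t * (t - 1)                # slope basis at x1
    return h00 * f0 + h * h10 * d0 + h01 * f1 + h * h11 * d1

def piecewise_hermite(nodes, f, df, x):
    # locate the subinterval [x_i, x_{i+1}] containing x, then interpolate
    i = max(0, min(len(nodes) - 2,
                   sum(1 for t in nodes[1:-1] if t < x)))
    return hermite_cubic(nodes[i], nodes[i + 1], f(nodes[i]), f(nodes[i + 1]),
                         df(nodes[i]), df(nodes[i + 1]), x)

def max_err(m):
    nodes = [k * math.pi / m for k in range(m + 1)]
    pts = [k * math.pi / 200 for k in range(201)]
    return max(abs(math.sin(p) - piecewise_hermite(nodes, math.sin, math.cos, p))
               for p in pts)

print(max_err(4) / max_err(8) > 8)  # roughly 16, as the O(h^4) bound predicts
```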
Definition. Given a partition ∆ : a = x_0 < x_1 < · · · < x_n = b of an interval [a, b], the space of spline functions of order m, S_∆^{(m)}, consists of the functions S : [a, b] → R such that

    (i) S ∈ C^{m−1}[a, b]
    (ii) S|[x_i, x_{i+1}] ∈ P_m.

Note: If S ∈ S_∆^{(m)}, then S' ∈ S_∆^{(m−1)} and ∫S ∈ S_∆^{(m+1)}.
The most common case, m = 3, is that of cubic splines. They are finding applications in graphics and, increasingly, in numerical methods. For instance, spline functions may be used as trial functions in connection with the Rayleigh-Ritz-Galerkin method for solving boundary-value problems of ordinary and partial differential equations. Introductions are given, for instance, by Greville (1969), Schultz (1973), Böhmer (1974), and de Boor (1978).
Theoretical Foundations
Consider a set Y := {y0 , y1 , . . . , yn } of n + 1 real numbers. We denote by

S∆ (Y ; ·)

an interpolating cubic spline function S∆ with S∆ (Y ; xi ) = yi , i = 0, 1, . . . , n.


Such an interpolating spline function S∆ (Y ; ·) is not uniquely determined
by the set Y of support ordinates. Roughly speaking, there are still two
degrees of freedom left, calling for suitable additional requirements. The
following three additional requirements are most commonly considered:
    (a) S_∆''(Y; a) = S_∆''(Y; b) = 0,
    (b) S_∆^{(k)}(Y; a) = S_∆^{(k)}(Y; b) for k = 0, 1, 2 : S_∆(Y; ·) is periodic,
    (c) S_∆'(Y; a) = y_0', S_∆'(Y; b) = y_n' for given numbers y_0', y_n'.
We will confirm that each of these three conditions by itself ensures
uniqueness of the interpolating spline function S∆ (Y ; ·). A prerequisite of
the condition (b) is, of course, that yn = y0 .
For this purpose, and to establish a characteristic minimum property of
spline functions, we consider, for integer m > 0, the sets

    K^m[a, b] = {f : [a, b] → R : f^{(m)} ∈ L²[a, b]}.

By K_p^m[a, b] we denote the set of all functions in K^m[a, b] with f^{(k)}(a) = f^{(k)}(b) for k = 0, 1, . . . , m − 1. We call such functions periodic, because they arise as restrictions to [a, b] of functions which are periodic with period b − a.
Note that S_∆ ∈ K³[a, b], and that S_∆(Y; ·) ∈ K_p³[a, b] if (b) holds.
If f ∈ K²[a, b], then we can define

    ‖f‖² := ∫_a^b |f''(x)|² dx.

Note that ‖f‖ ≥ 0. However, ‖f‖ = 0 may hold for functions which are not identically zero, for instance, for all linear functions f(x) ≡ cx + d. The function ‖ · ‖ is therefore called a seminorm (in this case, the Sobolev seminorm).
We proceed to show a fundamental identity due to Holladay [see for instance Ahlberg, Nilson, and Walsh (1967)].

Lemma. If f ∈ K²[a, b], if ∆ = {a = x_0 < x_1 < · · · < x_n = b} is a partition of the interval [a, b], and if S_∆ is a spline function with knots x_i ∈ ∆, then

    ‖f − S_∆‖² = ‖f‖² − ‖S_∆‖²
                 − 2 [ (f'(x) − S_∆'(x)) S_∆''(x) |_a^b − Σ_{i=1}^n (f(x) − S_∆(x)) S_∆'''(x) |_{x_{i−1}^+}^{x_i^−} ].

Here g(x)|_v^u stands for g(u) − g(v). Since S_∆'''(x) is piecewise constant with possible discontinuities at the knots x_1, . . . , x_{n−1}, we have to use the left and right limits of S_∆'''(x) at the locations x_i and x_{i−1}, respectively, in the above formula. This is indicated by the notation x_i^−, x_{i−1}^+.
Proof: By the definition of ‖ · ‖,

    ‖f − S_∆‖² = ∫_a^b |f''(x) − S_∆''(x)|² dx
               = ‖f‖² − 2 ∫_a^b f''(x) S_∆''(x) dx + ‖S_∆‖²
               = ‖f‖² − 2 ∫_a^b (f''(x) − S_∆''(x)) S_∆''(x) dx − ‖S_∆‖².

Integration by parts gives, for i = 1, 2, . . . , n,

    ∫_{x_{i−1}}^{x_i} (f''(x) − S_∆''(x)) S_∆''(x) dx
        = (f'(x) − S_∆'(x)) S_∆''(x) |_{x_{i−1}}^{x_i} − ∫_{x_{i−1}}^{x_i} (f'(x) − S_∆'(x)) S_∆'''(x) dx
        = (f'(x) − S_∆'(x)) S_∆''(x) |_{x_{i−1}}^{x_i} − (f(x) − S_∆(x)) S_∆'''(x) |_{x_{i−1}^+}^{x_i^−}
          + ∫_{x_{i−1}}^{x_i} (f(x) − S_∆(x)) S_∆^{(4)}(x) dx.

Now S_∆^{(4)}(x) ≡ 0 on the subintervals (x_{i−1}, x_i), and f', S_∆', S_∆'' are continuous on [a, b]. Adding these formulas for i = 1, 2, . . . , n yields the assertion of the lemma, since

    Σ_{i=1}^n (f'(x) − S_∆'(x)) S_∆''(x) |_{x_{i−1}}^{x_i} = (f'(x) − S_∆'(x)) S_∆''(x) |_a^b.  □

With the help of this lemma we will prove the important minimum-norm property of spline functions.

Theorem. Given a partition ∆ := {a = x_0 < x_1 < · · · < x_n = b} of the interval [a, b], values Y := {y_0, . . . , y_n}, and a function f ∈ K²[a, b] with f(x_i) = y_i for i = 0, 1, . . . , n, then ‖f‖² ≥ ‖S_∆(Y; ·)‖², and more precisely

    ‖f‖² − ‖S_∆(Y; ·)‖² = ‖f − S_∆(Y; ·)‖² ≥ 0

holds for every spline function S_∆(Y; ·), provided one of the conditions

    (a) S_∆''(Y; a) = S_∆''(Y; b) = 0,
    (b) f ∈ K_p²[a, b], S_∆(Y; ·) periodic,
    (c) f'(a) = S_∆'(Y; a), f'(b) = S_∆'(Y; b),

is met. In each of these cases, the spline function S_∆(Y; ·) is uniquely determined.
The existence of such spline functions will be shown later.
Proof: In each of the above three cases (a), (b), (c), the expression

    (f'(x) − S_∆'(x)) S_∆''(x) |_a^b − Σ_{i=1}^n (f(x) − S_∆(x)) S_∆'''(x) |_{x_{i−1}^+}^{x_i^−}

vanishes in the Holladay identity if S∆ ≡ S∆ (Y ; ·). This proves the minimum
property of the spline function S∆ (Y ; ·). Its uniqueness can be seen as follows:
suppose S̄_∆(Y; ·) is a second interpolating spline function satisfying the same side condition. Letting S̄_∆(Y; ·) play the role of the function f ∈ K²[a, b] in the theorem, the minimum property of S_∆(Y; ·) requires that

    ‖S̄_∆(Y; ·) − S_∆(Y; ·)‖² = ‖S̄_∆(Y; ·)‖² − ‖S_∆(Y; ·)‖² ≥ 0,

and since S_∆(Y; ·) and S̄_∆(Y; ·) may switch roles,

    ‖S̄_∆(Y; ·) − S_∆(Y; ·)‖² = ∫_a^b (S̄_∆''(Y; x) − S_∆''(Y; x))² dx = 0.

Since S_∆''(Y; ·) and S̄_∆''(Y; ·) are both continuous,

    S̄_∆''(Y; x) ≡ S_∆''(Y; x),

from which

    S̄_∆(Y; x) ≡ S_∆(Y; x) + cx + d

follows by integration. But S̄_∆(Y; x) = S_∆(Y; x) holds for x = a, b, and this implies c = d = 0.  □
The minimum-norm property of the spline function expressed in the theorem implies in case (a) that, among all functions f in K²[a, b] with f(x_i) = y_i, i = 0, 1, . . . , n, it is precisely the spline function S_∆(Y; ·) with S_∆''(Y; x) = 0 for x = a, b that minimizes the integral

    ‖f‖² = ∫_a^b |f''(x)|² dx.

The spline function of case (a) is often referred to as the natural spline. In case (b), with ‖f‖ minimized over the more restricted set K_p²[a, b], the function is called the periodic spline, and in case (c), minimizing over {f ∈ K²[a, b] | f'(a) = y_0', f'(b) = y_n'}, the clamped spline.
The expression f''(x)(1 + f'(x)²)^{−3/2} gives the curvature of the function f at x ∈ [a, b]. If f'(x) is small compared to 1, then the curvature is approximately equal to f''(x). The value ‖f‖ therefore provides an approximate measure of the total curvature of the function f in the interval [a, b]. In this sense, the natural spline function is the "straightest" function to interpolate given support points (x_i, y_i), i = 0, 1, . . . , n.

Determining Interpolating Spline Functions
In this section, we will describe computational methods for determining cubic spline functions which assume prescribed values at their knots and satisfy one of the side conditions (a), (b), (c). In the course of this, we will have also proved the existence of such spline functions; their uniqueness has already been established.
In what follows, ∆ = {xi |i = 0, 1, . . . , n} will be a fixed partition of the
interval [a, b] by knots a = x0 < x1 < · · · < xn = b, and Y = {yi |i =
0, 1, . . . , n} will be a set of n + 1 prescribed real numbers. In addition let

hj+1 := xj+1 − xj , j = 0, 1, . . . , n − 1.

We refer to the values of the second derivatives at the knots x_j ∈ ∆,

    M_j := S_∆''(Y; x_j),    j = 0, 1, . . . , n,

of the desired spline function S_∆(Y; ·) as the moments M_j of S_∆(Y; ·). We will show that spline functions are readily characterized by their moments, and that the moments of the interpolating spline function can be calculated as the solution of a system of linear equations.
Note that the second derivative S_∆''(Y; ·) of the spline function coincides with a linear function in each interval [x_j, x_{j+1}], j = 0, . . . , n − 1, and that these linear functions can be described in terms of the moments M_j of S_∆(Y; ·):

    S_∆''(Y; x) = M_j (x_{j+1} − x)/h_{j+1} + M_{j+1} (x − x_j)/h_{j+1}    for x ∈ [x_j, x_{j+1}].

By integration,

    S_∆'(Y; x) = −M_j (x_{j+1} − x)²/(2h_{j+1}) + M_{j+1} (x − x_j)²/(2h_{j+1}) + A_j,
    S_∆(Y; x) = M_j (x_{j+1} − x)³/(6h_{j+1}) + M_{j+1} (x − x_j)³/(6h_{j+1}) + A_j(x − x_j) + B_j,

for x ∈ [x_j, x_{j+1}], j = 0, 1, . . . , n − 1, where A_j, B_j are constants of integration. From S_∆(Y; x_j) = y_j, S_∆(Y; x_{j+1}) = y_{j+1}, we obtain the following equations for these constants A_j and B_j:
    M_j h_{j+1}²/6 + B_j = y_j,
    M_{j+1} h_{j+1}²/6 + A_j h_{j+1} + B_j = y_{j+1}.

Consequently,

    B_j = y_j − M_j h_{j+1}²/6,
    A_j = (y_{j+1} − y_j)/h_{j+1} − h_{j+1}(M_{j+1} − M_j)/6.
hj+1 6
This yields the following representation of the spline function in terms of its moments:

    S_∆(Y; x) = α_j + β_j(x − x_j) + γ_j(x − x_j)² + δ_j(x − x_j)³    for x ∈ [x_j, x_{j+1}],

where

    α_j := y_j,        γ_j := M_j/2,
    β_j := S_∆'(Y; x_j^+) = −M_j h_{j+1}/2 + A_j = (y_{j+1} − y_j)/h_{j+1} − (2M_j + M_{j+1}) h_{j+1}/6,
    δ_j := S_∆'''(Y; x_j^+)/6 = (M_{j+1} − M_j)/(6h_{j+1}).
Thus S_∆(Y; ·) has been characterized by its moments M_j. The task of calculating these moments will now be addressed.
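Once the moments are known, the local coefficients above translate directly into an evaluator. The following sketch is my own (function name and argument layout assumed):

```python
from bisect import bisect_right

def spline_eval(xs, ys, M, x):
    """Evaluate the cubic spline with knots xs, values ys and moments M,
    using the local coefficients alpha_j, beta_j, gamma_j, delta_j."""
    j = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    h = xs[j + 1] - xs[j]                    # h_{j+1} in the notation above
    alpha = ys[j]
    beta = (ys[j + 1] - ys[j]) / h - (2 * M[j] + M[j + 1]) * h / 6
    gamma = M[j] / 2
    delta = (M[j + 1] - M[j]) / (6 * h)
    t = x - xs[j]
    return alpha + t * (beta + t * (gamma + t * delta))

# S(x) = x^3 is itself a cubic spline on [0, 2] with knots {0, 1, 2}:
# S'' = 6x gives moments [0, 6, 12], and values [0, 1, 8]
print(spline_eval([0.0, 1.0, 2.0], [0.0, 1.0, 8.0], [0.0, 6.0, 12.0], 1.5))  # 3.375
```

The nested (Horner-like) evaluation of the local cubic needs only three multiplications per point.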
The continuity of S_∆'(Y; ·) at the knots x = x_j, j = 1, 2, . . . , n − 1 [namely, the relations S_∆'(Y; x_j^−) = S_∆'(Y; x_j^+)] yields n − 1 equations for the moments M_j. Substituting the values for A_j and B_j gives

    S_∆'(Y; x) = −M_j (x_{j+1} − x)²/(2h_{j+1}) + M_{j+1} (x − x_j)²/(2h_{j+1})
                 + (y_{j+1} − y_j)/h_{j+1} − h_{j+1}(M_{j+1} − M_j)/6.

For j = 1, 2, . . . , n − 1, we have therefore

    S_∆'(Y; x_j^−) = (y_j − y_{j−1})/h_j + M_j h_j/3 + M_{j−1} h_j/6,
    S_∆'(Y; x_j^+) = (y_{j+1} − y_j)/h_{j+1} − M_j h_{j+1}/3 − M_{j+1} h_{j+1}/6,

and since S_∆'(Y; x_j^+) = S_∆'(Y; x_j^−),

    (h_j/6) M_{j−1} + ((h_j + h_{j+1})/3) M_j + (h_{j+1}/6) M_{j+1} = (y_{j+1} − y_j)/h_{j+1} − (y_j − y_{j−1})/h_j    (∗)

for j = 1, 2, . . . , n − 1. These are n − 1 equations for the n + 1 unknown


moments. Two further equations can be gained separately from each of the
side conditions (a), (b), and (c).
Case (a): S_∆''(Y; a) = M_0 = 0 = M_n = S_∆''(Y; b).

Case (b): S_∆''(Y; a) = S_∆''(Y; b) ⇒ M_0 = M_n, and

    S_∆'(Y; a) = S_∆'(Y; b) ⇒ (h_n/6) M_{n−1} + ((h_n + h_1)/3) M_n + (h_1/6) M_1 = (y_1 − y_n)/h_1 − (y_n − y_{n−1})/h_n.

The latter condition is identical with (∗) for j = n if we put

    h_{n+1} := h_1,  M_{n+1} := M_1,  y_{n+1} := y_1.

Recall that (b) requires y_n = y_0.

Case (c): S_∆'(Y; a) = y_0' ⇒ (h_1/3) M_0 + (h_1/6) M_1 = (y_1 − y_0)/h_1 − y_0',
          S_∆'(Y; b) = y_n' ⇒ (h_n/6) M_{n−1} + (h_n/3) M_n = y_n' − (y_n − y_{n−1})/h_n.
6 3 hn
These last two equations in cases (a) and (c), as well as those in (∗), can be written in a common format:

    µ_j M_{j−1} + 2M_j + λ_j M_{j+1} = d_j,    j = 1, 2, . . . , n − 1,

upon introducing the abbreviations

    λ_j := h_{j+1}/(h_j + h_{j+1}),    µ_j := 1 − λ_j = h_j/(h_j + h_{j+1}),
    d_j := 6/(h_j + h_{j+1}) · [(y_{j+1} − y_j)/h_{j+1} − (y_j − y_{j−1})/h_j],    j = 1, 2, . . . , n − 1.

In case (a), we define in addition

    λ_0 := 0,  d_0 := 0,  µ_n := 0,  d_n := 0,

and in case (c)

    λ_0 := 1,  d_0 := (6/h_1)[(y_1 − y_0)/h_1 − y_0'],
    µ_n := 1,  d_n := (6/h_n)[y_n' − (y_n − y_{n−1})/h_n].
This leads in cases (a) and (c) to the following system of linear equations for the moments M_i:

    2M_0 + λ_0 M_1 = d_0,
    µ_1 M_0 + 2M_1 + λ_1 M_2 = d_1,
        . . .
    µ_{n−1} M_{n−2} + 2M_{n−1} + λ_{n−1} M_n = d_{n−1},
    µ_n M_{n−1} + 2M_n = d_n.

In matrix notation, we have

    [ 2    λ_0                      ] [ M_0 ]   [ d_0 ]
    [ µ_1  2    λ_1                 ] [ M_1 ]   [ d_1 ]
    [      µ_2  ·    ·              ] [  ·  ] = [  ·  ]
    [           ·    ·    ·         ] [  ·  ]   [  ·  ]
    [                ·    2  λ_{n−1}] [  ·  ]   [  ·  ]
    [                     µ_n   2   ] [ M_n ]   [ d_n ]

The periodic case (b) also requires further definitions,

    λ_n := h_1/(h_n + h_1),    µ_n := 1 − λ_n = h_n/(h_n + h_1),
    d_n := 6/(h_n + h_1) · [(y_1 − y_n)/h_1 − (y_n − y_{n−1})/h_n],

which then lead to the following linear system of equations for the moments M_1, M_2, . . . , M_n (= M_0):

    [ 2    λ_1                  µ_1 ] [ M_1 ]   [ d_1 ]
    [ µ_2  2    λ_2                 ] [ M_2 ]   [ d_2 ]
    [      µ_3  ·    ·              ] [  ·  ] = [  ·  ]
    [           ·    ·    ·         ] [  ·  ]   [  ·  ]
    [                ·    2  λ_{n−1}] [  ·  ]   [  ·  ]
    [ λ_n                 µ_n   2   ] [ M_n ]   [ d_n ]

(The corner entries µ_1 and λ_n reflect the wraparound M_0 = M_n, M_{n+1} = M_1.)
The coefficients λ_i, µ_i depend only on the location of the knots x_j ∈ ∆ and not on the prescribed values y_i ∈ Y nor on y_0', y_n' in case (c).
Theorem. The above systems of linear equations are nonsingular for any
partition ∆ of [a, b].
This means that the above systems of linear equations have unique so-
lutions for arbitrary right-hand sides, and that consequently the problem of
interpolation by cubic splines has a unique solution in each of the three cases
(a), (b), (c).
Proof: Consider the (n + 1) × (n + 1) matrix

    A = [ 2    λ_0                      ]
        [ µ_1  2    λ_1                 ]
        [      µ_2  ·    ·              ]
        [           ·    ·    ·         ]
        [                ·    2  λ_{n−1}]
        [                     µ_n   2   ]

of the linear system. Note from their definitions that

    λ_i ≥ 0,    µ_i ≥ 0,    λ_i + µ_i = 1
for all coefficients λi , µi . Hence, A is strictly diagonally dominant and its
nonsingularity follows directly.
However, for later purposes we shall also require the following stronger property:

    ‖Az‖_∞ ≥ ‖z‖_∞

for every vector z = (z_0, . . . , z_n)^T. Indeed, let r be such that |z_r| = max_i |z_i| and w := Az. Then

    µ_r z_{r−1} + 2z_r + λ_r z_{r+1} = w_r    (µ_0 := 0, λ_n := 0).

By the definition of r and because µ_r + λ_r = 1,

    max_i |w_i| ≥ |w_r| ≥ 2|z_r| − µ_r |z_{r−1}| − λ_r |z_{r+1}|
                ≥ 2|z_r| − µ_r |z_r| − λ_r |z_r|
                = (2 − µ_r − λ_r)|z_r|
                = |z_r| = max_i |z_i|.
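The bound ‖Az‖_∞ ≥ ‖z‖_∞ is easy to observe numerically; the snippet below is my own illustration, with made-up coefficients satisfying λ_i + µ_i = 1 and the boundary conventions µ_0 = λ_n = 0.

```python
import random

def apply_A(lam, mu, z):
    # w = Az for the tridiagonal matrix with diagonal 2,
    # superdiagonal lam[0..n-1] and subdiagonal mu[1..n]
    n = len(z) - 1
    w = []
    for i in range(n + 1):
        wi = 2 * z[i]
        if i > 0:
            wi += mu[i] * z[i - 1]
        if i < n:
            wi += lam[i] * z[i + 1]
        w.append(wi)
    return w

random.seed(1)
n = 6
lam = [random.random() for _ in range(n + 1)]   # 0 <= lam_i <= 1
mu = [1.0 - lam[i] for i in range(n + 1)]       # mu_i = 1 - lam_i
mu[0] = 0.0                                     # first row: 2 z_0 + lam_0 z_1
lam[n] = 0.0                                    # last row:  mu_n z_{n-1} + 2 z_n
z = [random.uniform(-1.0, 1.0) for _ in range(n + 1)]
w = apply_A(lam, mu, z)
print(max(map(abs, w)) >= max(map(abs, z)))  # True, as the proof guarantees
```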


To solve the equations, we may proceed as follows: subtract µ_1/2 times the first equation from the second, thereby annihilating µ_1, then a suitable multiple of the new second equation from the third to annihilate µ_2, and so on. This leads to a "triangular" system of equations which can be solved in a straightforward fashion. Note that this method is the Gaussian elimination algorithm for tridiagonal matrices.

    q_0 := −λ_0/2;  u_0 := d_0/2;  λ_n := 0;
    for k := 1, 2, . . . , n do
      begin
        p_k := µ_k q_{k−1} + 2;
        q_k := −λ_k/p_k;
        u_k := (d_k − µ_k u_{k−1})/p_k
      end;
    M_n := u_n;
    for k := n − 1, n − 2, . . . , 0 do
      M_k := q_k M_{k+1} + u_k;

It can be shown that p_k > 0, so that the q_k, u_k are well defined (Exercise). The linear system in case (b) can be solved in a similar, but not quite as straightforward, fashion. An ALGOL program by C. Reinsch can be found in Bulirsch and Rutishauser (1968).
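In Python, the same forward elimination and back substitution might look like this (a sketch; the function name and argument layout are mine):

```python
def spline_moments(lam, mu, d):
    """Solve 2*M[0] + lam[0]*M[1] = d[0],
             mu[k]*M[k-1] + 2*M[k] + lam[k]*M[k+1] = d[k]  (k = 1..n-1),
             mu[n]*M[n-1] + 2*M[n] = d[n]
    by forward elimination and back substitution; lam[n] is ignored."""
    n = len(d) - 1
    q = [0.0] * (n + 1)
    u = [0.0] * (n + 1)
    q[0] = -lam[0] / 2.0
    u[0] = d[0] / 2.0
    for k in range(1, n + 1):
        p = mu[k] * q[k - 1] + 2.0            # p_k > 0 (Exercise)
        q[k] = -(lam[k] if k < n else 0.0) / p
        u[k] = (d[k] - mu[k] * u[k - 1]) / p
    M = [0.0] * (n + 1)
    M[n] = u[n]
    for k in range(n - 1, -1, -1):
        M[k] = q[k] * M[k + 1] + u[k]
    return M

# natural spline (case (a)) through (0, 0), (1, 1), (2, 0):
# lam_0 = d_0 = mu_2 = d_2 = 0, lam_1 = mu_1 = 1/2, d_1 = -6
print(spline_moments([0.0, 0.5, 0.0], [0.0, 0.5, 0.0], [0.0, -6.0, 0.0]))
```

For the sample data the moments come out as M_0 = M_2 = 0 (the natural end conditions) and M_1 = −3.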
The reader can find more details in Greville (1969) and de Boor (1972), ALGOL programs in Herriot and Reinsch (1971), and FORTRAN programs in de Boor (1978). These references also contain information and algorithms for the higher spline functions S_∆^{(m)}, m ≥ 2, and other generalizations.
Convergence Properties of Spline Functions
Interpolating polynomials may not converge to a function f whose values
they interpolate, even if the partitions ∆ are chosen arbitrarily fine (recall the
Runge example). In contrast, we will show in this section that, under mild
conditions on the function f and the partitions ∆, the interpolating spline
functions do converge towards f as the fineness of the underlying partitions
approaches zero.
We will show first that the moments of the interpolating spline function converge to the second derivatives of the given function. More precisely, consider a fixed partition ∆ = {a = x_0 < x_1 < · · · < x_n = b} of [a, b], and let

    M = (M_0, M_1, . . . , M_n)^T

be the vector of moments M_j of the spline function S_∆(Y; ·) with y_j = f(x_j) for j = 0, 1, . . . , n, as well as

    S_∆'(Y; a) = f'(a),    S_∆'(Y; b) = f'(b).

We are thus dealing with case (c). The vector M of moments satisfies the equation

    AM = d,

which expresses the linear system of equations in matrix form. The components of d are the d_j previously defined. Let F and r be the vectors

    F := (f''(x_0), f''(x_1), . . . , f''(x_n))^T,    r := d − AF = A(M − F).

Writing ‖z‖_∞ := max_i |z_i| for vectors z, and

    ‖∆‖ := max_j |x_{j+1} − x_j|

for the fineness of the partition ∆, we have:

Proposition: If f ∈ C⁴[a, b] and |f^{(4)}(x)| ≤ L for x ∈ [a, b], then

    ‖M − F‖_∞ ≤ ‖r‖_∞ ≤ (3/4) L ‖∆‖².
4
Proof: By definition, r_0 = d_0 − 2f''(x_0) − f''(x_1), and by the definition of d_0,

    r_0 = (6/h_1)[(y_1 − y_0)/h_1 − y_0'] − 2f''(x_0) − f''(x_1).

Using Taylor's theorem to express y_1 = f(x_1) and f''(x_1) in terms of the value and the derivatives of the function f at x_0 (recall y_0' = f'(x_0) in case (c)) yields

    r_0 = (6/h_1)[f'(x_0) + (h_1/2) f''(x_0) + (h_1²/6) f'''(x_0) + (h_1³/24) f^{(4)}(τ_1) − f'(x_0)]
          − 2f''(x_0) − [f''(x_0) + h_1 f'''(x_0) + (h_1²/2) f^{(4)}(τ_2)]
        = (h_1²/4) f^{(4)}(τ_1) − (h_1²/2) f^{(4)}(τ_2)

with τ_1, τ_2 ∈ [x_0, x_1]. Therefore

    |r_0| ≤ (3/4) L ‖∆‖².
4
Analogously, we find for

    r_n = d_n − f''(x_{n−1}) − 2f''(x_n)

that

    |r_n| ≤ (3/4) L ‖∆‖².

For the remaining components of r = d − AF, we find similarly

    r_j = d_j − µ_j f''(x_{j−1}) − 2f''(x_j) − λ_j f''(x_{j+1})
        = 6/(h_j + h_{j+1}) · [(y_{j+1} − y_j)/h_{j+1} − (y_j − y_{j−1})/h_j]
          − h_j/(h_j + h_{j+1}) · f''(x_{j−1}) − 2f''(x_j) − h_{j+1}/(h_j + h_{j+1}) · f''(x_{j+1}).

Taylor's formula at x_j then gives

    r_j = 1/(h_j + h_{j+1}) · { 6 [ f'(x_j) + (h_{j+1}/2) f''(x_j) + (h_{j+1}²/6) f'''(x_j) + (h_{j+1}³/24) f^{(4)}(τ_1)
                                    − f'(x_j) + (h_j/2) f''(x_j) − (h_j²/6) f'''(x_j) + (h_j³/24) f^{(4)}(τ_2) ]
                                − h_j [ f''(x_j) − h_j f'''(x_j) + (h_j²/2) f^{(4)}(τ_3) ]
                                − 2 f''(x_j)(h_j + h_{j+1})
                                − h_{j+1} [ f''(x_j) + h_{j+1} f'''(x_j) + (h_{j+1}²/2) f^{(4)}(τ_4) ] }
        = 1/(h_j + h_{j+1}) · [ (h_{j+1}³/4) f^{(4)}(τ_1) + (h_j³/4) f^{(4)}(τ_2) − (h_j³/2) f^{(4)}(τ_3) − (h_{j+1}³/2) f^{(4)}(τ_4) ].

Here τ_1, . . . , τ_4 ∈ [x_{j−1}, x_{j+1}]. Therefore

    |r_j| ≤ (3/4) L · (h_j³ + h_{j+1}³)/(h_j + h_{j+1}) ≤ (3/4) L ‖∆‖²

for j = 1, 2, . . . , n − 1. In sum,

    ‖r‖_∞ ≤ (3/4) L ‖∆‖²,

and since r = A(M − F), it follows from ‖Az‖_∞ ≥ ‖z‖_∞ that ‖M − F‖_∞ ≤ ‖r‖_∞.  □
Theorem. Suppose f ∈ C⁴[a, b] and |f^{(4)}(x)| ≤ L for x ∈ [a, b]. Let ∆ be a partition ∆ = {a = x_0 < · · · < x_n = b} of the interval [a, b], and let K be a constant such that

    ‖∆‖ / |x_{j+1} − x_j| ≤ K    for j = 0, . . . , n − 1.

If S_∆ is the spline function which interpolates the values of the function f at the knots x_0, . . . , x_n ∈ ∆ and satisfies S_∆'(x) = f'(x) for x = a, b, then there exist constants C_k ≤ 2, which do not depend on the partition ∆, such that for x ∈ [a, b],

    |f^{(k)}(x) − S_∆^{(k)}(x)| ≤ C_k L K ‖∆‖^{4−k},    k = 0, 1, 2, 3.¹

Note that the constant K ≥ 1 bounds the deviation of the partition ∆ from
uniformity.
Proof: We prove the theorem first for k = 3. For x ∈ [x_{j−1}, x_j],

    S_∆'''(x) − f'''(x) = (M_j − M_{j−1})/h_j − f'''(x)
                        = (M_j − f''(x_j))/h_j − (M_{j−1} − f''(x_{j−1}))/h_j
                          + [f''(x_j) − f''(x) − (f''(x_{j−1}) − f''(x))]/h_j − f'''(x).

Using the previous proposition and Taylor's theorem at x, we conclude that

    |S_∆'''(x) − f'''(x)| ≤ (3/2) L ‖∆‖²/h_j
        + (1/h_j) | (x_j − x) f'''(x) + ((x_j − x)²/2) f^{(4)}(η_1)
                    − (x_{j−1} − x) f'''(x) − ((x_{j−1} − x)²/2) f^{(4)}(η_2) − h_j f'''(x) |
        ≤ (3/2) L ‖∆‖²/h_j + (L/2) ‖∆‖²/h_j,    η_1, η_2 ∈ [x_{j−1}, x_j].

By hypothesis, ‖∆‖/h_j ≤ K for every j. Thus |f'''(x) − S_∆'''(x)| ≤ 2LK‖∆‖.
To prove the proposition for k = 2, we observe: for each x ∈ (a, b) there exists a closest knot x_j = x_j(x), for which |x_j(x) − x| ≤ ‖∆‖/2. From

    f''(x) − S_∆''(x) = f''(x_j(x)) − S_∆''(x_j(x)) + ∫_{x_j(x)}^x (f'''(t) − S_∆'''(t)) dt,

and since K ≥ 1,

    |f''(x) − S_∆''(x)| ≤ (3/4) L ‖∆‖² + (‖∆‖/2) · 2LK‖∆‖ ≤ (7/4) L K ‖∆‖²,    x ∈ [a, b].
We consider k = 1 next. In addition to the boundary points ξ_0 := a, ξ_{n+1} := b, there exist, by Rolle's theorem, n further points ξ_j ∈ (x_{j−1}, x_j), j = 1, . . . , n, with

    f'(ξ_j) = S_∆'(ξ_j),    j = 0, 1, . . . , n + 1.

For any x ∈ [a, b] there exists a closest one of the above points ξ_j = ξ_j(x), for which consequently

    |ξ_j(x) − x| < ‖∆‖.

Thus

    f'(x) − S_∆'(x) = ∫_{ξ_j(x)}^x (f''(t) − S_∆''(t)) dt,

and

    |f'(x) − S_∆'(x)| ≤ (7/4) L K ‖∆‖² · ‖∆‖ = (7/4) L K ‖∆‖³,    x ∈ [a, b].

The case k = 0 remains. Since

    f(x) − S_∆(x) = ∫_{x_j(x)}^x (f'(t) − S_∆'(t)) dt,

it follows from the above result for k = 1 that

    |f(x) − S_∆(x)| ≤ (7/4) L K ‖∆‖³ · (‖∆‖/2) = (7/8) L K ‖∆‖⁴,    x ∈ [a, b].  □

Clearly, this theorem implies that for sequences

    ∆_m = {a = x_0^{(m)} < x_1^{(m)} < · · · < x_{n_m}^{(m)} = b},    m = 0, 1, . . . ,

of partitions with ‖∆_m‖ → 0 and

    sup_{m,j} ‖∆_m‖ / (x_{j+1}^{(m)} − x_j^{(m)}) ≤ K < +∞,

the corresponding spline functions S_{∆_m} and their first three derivatives converge uniformly on [a, b] to f and its corresponding derivatives. Note that even the third derivative f''' is uniformly approximated by S_{∆_m}''', a usually discontinuous sequence of step functions.
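The predicted O(‖∆‖⁴) behaviour for k = 0 can be observed numerically. The following sketch (all names mine) assembles the clamped spline of case (c) for f = sin from scratch and halves the mesh once:

```python
import math
from bisect import bisect_right

def clamped_spline(xs, ys, dy0, dyn):
    # set up lam, mu, d for case (c); note h[j] here is h_{j+1} in the text
    n = len(xs) - 1
    h = [xs[j + 1] - xs[j] for j in range(n)]
    lam = [0.0] * (n + 1); mu = [0.0] * (n + 1); d = [0.0] * (n + 1)
    lam[0], d[0] = 1.0, 6.0 / h[0] * ((ys[1] - ys[0]) / h[0] - dy0)
    for j in range(1, n):
        lam[j] = h[j] / (h[j - 1] + h[j])
        mu[j] = 1.0 - lam[j]
        d[j] = 6.0 / (h[j - 1] + h[j]) * ((ys[j + 1] - ys[j]) / h[j]
                                          - (ys[j] - ys[j - 1]) / h[j - 1])
    mu[n], d[n] = 1.0, 6.0 / h[n - 1] * (dyn - (ys[n] - ys[n - 1]) / h[n - 1])
    # tridiagonal elimination for the moments (diagonal entries are 2)
    q = [0.0] * (n + 1); u = [0.0] * (n + 1)
    q[0], u[0] = -lam[0] / 2.0, d[0] / 2.0
    for k in range(1, n + 1):
        p = mu[k] * q[k - 1] + 2.0
        q[k] = -lam[k] / p               # lam[n] stays 0
        u[k] = (d[k] - mu[k] * u[k - 1]) / p
    M = [0.0] * (n + 1)
    M[n] = u[n]
    for k in range(n - 1, -1, -1):
        M[k] = q[k] * M[k + 1] + u[k]

    def S(x):
        # evaluate via the local coefficients alpha, beta, gamma, delta
        j = min(max(bisect_right(xs, x) - 1, 0), n - 1)
        hj = xs[j + 1] - xs[j]
        beta = (ys[j + 1] - ys[j]) / hj - (2 * M[j] + M[j + 1]) * hj / 6
        t = x - xs[j]
        return ys[j] + t * (beta + t * (M[j] / 2 + t * (M[j + 1] - M[j]) / (6 * hj)))
    return S

def max_err(n):
    xs = [k * math.pi / n for k in range(n + 1)]
    S = clamped_spline(xs, [math.sin(t) for t in xs],
                       math.cos(xs[0]), math.cos(xs[-1]))
    pts = [k * math.pi / 200 for k in range(201)]
    return max(abs(math.sin(p) - S(p)) for p in pts)

print(max_err(4) / max_err(8) > 8)  # error drops roughly 16x when the mesh is halved
```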

¹The estimates of the theorem have been improved by Hall and Meyer (1976): |f^{(k)}(x) − S_∆^{(k)}(x)| ≤ c_k L ‖∆‖^{4−k}, k = 0, 1, 2, 3, with c_0 := 5/384, c_1 := 1/24, c_2 := 3/8, c_3 := (K + K^{−1})/2. Here c_0 and c_1 are optimal.

Software
Interpolation routines in the IMSL Library are based on the book A Practical Guide to Splines by Carl de Boor (Springer, 1978). CSDEC is a subroutine for interpolation by cubic splines with user-supplied end conditions, and CSPER is a routine for interpolation by periodic cubic splines. Also included are routines to calculate cubic splines with minimal oscillations or preserving concavity, two-dimensional interpolation by bicubic splines, and interpolation by quasi-Hermite piecewise polynomials.
Numerical Recipes contains a subroutine spline to interpolate with cubic
splines and splie2 to interpolate in two dimensions with bicubic splines.
The MATLAB function SPLINE can be used to calculate interpolating
cubic splines.
