Application of Differential Equations in Biology
4 Chebyshev polynomials
4.1 Properties of the Chebyshev polynomials
6 Conclusion
Chapter - 1
1.1 Biography
Pafnuty Lvovich Chebyshev was born on May 4, 1821 in Okatovo, Russia. A physical handicap left him unable to walk well and kept him from the usual activities of childhood. Instead, he soon found a passion: constructing mechanisms.
He did not teach only at Saint Petersburg University: from 1852 to 1858 he also taught practical mechanics at the Alexander Lyceum in Pushkin, a suburb of Saint Petersburg.
1.2 Chebyshev’s interest in approximation theory
Chebyshev had been interested in mechanisms since his childhood. At that time, the theory of mechanisms played an important role because of the industrialisation.
Chapter - 2
This chapter uses standard concepts from linear algebra, together with some basic definitions that are needed later.
Since
Theorem 1. The best approximating polynomial p(x) ∈ Pn is such that
where
Let {x1, x2, …, xn} be a basis for the inner product space V. Let
for k = 1, 2, …, n – 1.
Then pk is the projection of xk+1 onto span(u1, u2, …, uk), and the set
{u1, u2, …, un} is an orthonormal basis for V.
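As a computational illustration of this construction, the following sketch carries out Gram–Schmidt on the basis {1, x, x^2}. The inner product ⟨f, g⟩ = ∫ from −1 to 1 of f(x)g(x) dx and the use of sympy are assumptions made here purely for the example; the computation by hand follows in the text below.

import sympy as sp

# Gram-Schmidt orthonormalization of {1, x, x^2}; a minimal sketch.
# Assumption: <f, g> = integral of f(x)*g(x) over [-1, 1] (chosen only for
# illustration; it need not be the inner product used in this chapter).
x = sp.symbols('x')

def inner(f, g):
    return sp.integrate(f * g, (x, -1, 1))

basis = [sp.Integer(1), x, x**2]
orthonormal = []
for v in basis:
    # subtract the projection of v onto the span of the vectors found so far
    w = sp.expand(v - sum(inner(v, u) * u for u in orthonormal))
    orthonormal.append(sp.simplify(w / sp.sqrt(inner(w, w))))

print(orthonormal)   # the orthonormal basis u1, u2, u3 for span{1, x, x^2}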
We start with the basis {1, x, x^2} for the inner product space V.
Then,
So we have to calculate each inner product
Thus,
In fact, the polynomials that are orthogonal with respect to the inner
product
Chapter - 3
Pafnuty Lvovich Chebyshev was thus the first to come up with the idea of approximating functions in the uniform norm. The question he asked himself at that time was: how can a continuous function be represented by a polynomial of degree at most n in such a way that the maximum error is minimized?
3.1 Existence
In 1854, Chebyshev found a solution to the problem of best
approximation. He observed the following
If the conclusion of the lemma is false, then we may suppose that f(x1) − p(x1) = E for some x1. But that
with error
that f(x1) − p0 = −(f(x2) − p0) = . Suppose d is any other constant. Then E = f − d cannot satisfy Lemma 1. In fact,
E(x1) = f(x1) − d
E(x2) = f(x2) − d;
alternates between .
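The choice of the best approximating constant can also be checked numerically. The sketch below (numpy assumed, and f(x) = e^x on [0, 1] chosen only as an illustration) confirms that p0 = (max f + min f)/2 yields an error that alternates between −E and +E with E = (max f − min f)/2.

import numpy as np

# Best constant approximation of a continuous function on a grid:
# p0 = (max f + min f) / 2, with error alternating between -E and +E,
# where E = (max f - min f) / 2.  Example function chosen arbitrarily.
xs = np.linspace(0.0, 1.0, 10001)
fx = np.exp(xs)
p0 = (fx.max() + fx.min()) / 2
E = (fx.max() - fx.min()) / 2

err = fx - p0
print(err.min(), err.max(), E)   # err attains both -E and +E, so p0 is best in P0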
Theorem 2. Let f(x) ∈ C[a, b], and suppose that p(x) is a best approximation to f(x) out of Pn. Then there is an alternating set for f − p consisting of at least n + 2 points.
since if f(x) ∈ Pn, then f(x) = p(x) and there would be no alternating set.
Hence,
We call an interval with a (+) point a (+) interval, an interval with a (–)
point a (–) interval. It is important to notice that no (+) interval can touch
a (–) interval. Hence, the intervals are separated by an interval
containing a zero for φ.
I1, ..., Ik1 are (+) intervals,
Ik1+1, ..., Ik2 are (−) intervals,
...,
Ikm−1+1, ..., Ikm are (−1)^(m−1) intervals.
The (+) intervals and (−) intervals are strictly separated, hence we can find points z1, ..., zm−1 ∉ N such that each zj lies between the intervals Ikj and Ikj+1. Set q(x) = (z1 − x)(z2 − x) · · · (zm−1 − x).
Our first claim is that q(x) and f − p have the same sign. This is true because q(x) has no zeros on the (±) intervals, and thus is of constant sign on each of them. Thus we have that q > 0 on I1, ..., Ik1, because every factor (zj − x) > 0 on these intervals. Consequently, q < 0 on Ik1+1, ..., Ik2, because (z1 − x) < 0 on these intervals while the remaining factors are still positive.
Our next step is to show that p(x) + λq(x), for a suitably small λ > 0, is a better approximation to f(x) than p(x). We show this in the two cases x ∈ N and x ∉ N.
Let x ∈ N. Then,
So we have arrived at a contradiction: we showed that p + λq is a better approximation to f(x) than p(x), while p(x) is the best approximation to f(x). Therefore our assumption m < n + 2 is false, and hence m ≥ n + 2.
3.2 Uniqueness
In this section, we will show that the best approximating
polynomial is unique.
because
Thus,
While
−E ≤ (f − p)(xi), (f − q)(xi) ≤ E.
This means that
Theorem 4. Let f(x) ∈ C[a, b], and let p(x) ∈ Pn. If f − p has an alternating set containing n + 2 (or more) points, then p(x) is the best approximation to f(x) out of Pn.
Thus,
Then we have
Thus we have
This means that f(xi) − p(xi) and f(xi) − p(xi) − f(xi) + q(xi) = q(xi) − p(xi) must have the same sign (if |b| < |a|, then a and a − b have the same sign). Hence q − p = (f − p) − (f − q) alternates n + 2 (or more) times in sign, because f − p does too. This means that q − p has at least n + 1 zeros. Since q − p ∈ Pn, and a polynomial of degree at most n with n + 1 zeros is the zero polynomial, we must have q(x) = p(x). This contradicts the strict inequality, thus we conclude that p(x) is the best approximation to f(x) out of Pn.
Example 1. Consider the function f(x) = sin (4x) in [–π, π]. Figure
4.1 shows this function together with the best approximating
polynomial p0 = 0.
Figure 4.1: Illustration of the function f(x) = sin (4x) with best
approximating polynomial p0 = 0.
The polynomial p7 = 0 is not a best approximation, since f − p7 only alternates 8 times in sign, while it should alternate at least n + 2 = 9 times in sign. So in P7 there exists a better approximating polynomial than p7 = 0.
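This can be checked numerically. The sketch below (numpy assumed) locates the grid points where |f − p7| attains its maximum on [−π, π] and counts the sign alternations among them, giving 8 rather than the required 9.

import numpy as np

# Count the alternation points of f - p7 for f(x) = sin(4x), p7 = 0 on [-pi, pi]:
# grid points where |f - p7| is (numerically) maximal, with alternating signs.
x = np.linspace(-np.pi, np.pi, 200001)
err = np.sin(4 * x)                        # f - p7 with p7 = 0
E = np.abs(err).max()                      # uniform norm of the error
near_max = np.abs(np.abs(err) - E) < 1e-6  # grid points where the error is close to +-E
signs = np.sign(err[near_max])
alternations = 1 + np.count_nonzero(np.diff(signs) != 0)
print(alternations)   # 8 < n + 2 = 9, so p7 = 0 cannot be best in P7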
Figure 4.2: The polynomial p(x) = x − is the best approximation of degree 1 to f(x) = x^2, because f − p has an alternating set of n + 2 = 3 points.
Let f(x) ∈ C[a, b], and suppose that q(x) ∈ Pn is such that f(xi) − q(xi) alternates in sign at n + 2 points a ≤ x0 < x1 < · · · < xn+1 ≤ b. Then
min_{0 ≤ i ≤ n+1} |f(xi) − q(xi)| ≤ min_{p ∈ Pn} max_{a ≤ x ≤ b} |f(x) − p(x)|.
Figure 4.3: Illustration of de la Vallée Poussin’s theorem for f(x) = e^x and n = 5. Some polynomial r(x) ∈ P5 gives an error f − r for which we can identify n + 2 = 7 points at which f − r changes sign.
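As a rough numerical companion to this figure, the sketch below takes r(x) to be the degree-5 interpolant of e^x at Chebyshev nodes on [−1, 1] (an assumption made only for illustration, together with the use of numpy), locates the 7 points where the error f − r alternates, and prints the resulting lower and upper bounds on the best-approximation error.

import numpy as np

# De la Vallee Poussin bounds for f(x) = exp(x), n = 5, on [-1, 1].
# r(x) is chosen as the degree-5 interpolant at Chebyshev nodes (an assumption).
f = np.exp
n = 5
nodes = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
r = np.polynomial.Polynomial.fit(nodes, f(nodes), n)   # interpolant in P5

xs = np.linspace(-1.0, 1.0, 2001)
err = f(xs) - r(xs)

# The n + 2 = 7 alternation points: the endpoints plus the interior extrema of the error.
interior = np.where(np.diff(np.sign(np.diff(err))) != 0)[0] + 1
pts = np.concatenate(([0], interior, [len(xs) - 1]))
lower = np.abs(err[pts]).min()    # de la Vallee Poussin lower bound on E_5(f)
upper = np.abs(err).max()         # trivial upper bound ||f - r||_inf
print(len(pts), lower, upper)     # n + 2 = 7 alternation points; lower <= E_5(f) <= upper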
So de la Vallée Poussin’s theorem gives a nice mechanism for bounding the error of the best approximation from below.
Thus,
overcome . Hence,
of degree n with n + 1 roots is the zero polynomial. Thus, p(x) =
q(x). This contradicts the strict inequality. Hence, there must be at
least one i for which
Chapter - 4
Chebyshev polynomials
To show how Chebyshev was able to find the best approximating polynomial, we first need to know what the so-called Chebyshev polynomials are.
That is,
Tn+1(x) = 2xTn(x) – Tn–1(x). (5.1)
T0(x) = 1
T1(x) = x
T2(x) = 2xT1(x) − T0(x) = 2x^2 − 1
T3(x) = 2xT2(x) − T1(x) = 4x^3 − 3x
T4(x) = 2xT3(x) − T2(x) = 8x^4 − 8x^2 + 1
...
Tn+1(x) = 2xTn(x) − Tn−1(x), n ≥ 1
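As a quick sanity check, the short sketch below (numpy assumed; the thesis itself contains no code) generates these coefficients from the recurrence (5.1).

import numpy as np
from numpy.polynomial import Polynomial

# Build T_0 ... T_4 from T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x).
T = [Polynomial([1]), Polynomial([0, 1])]            # T0(x) = 1, T1(x) = x
for n in range(1, 4):
    T.append(Polynomial([0, 2]) * T[n] - T[n - 1])   # 2x * Tn - T(n-1)

for n, p in enumerate(T):
    print(n, p.coef)    # e.g. n = 2 -> [-1, 0, 2], i.e. 2x^2 - 1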
In the next figure, the first five Chebyshev polynomials are shown.
Figure 5.1: The first five Chebyshev polynomials.
Proof. Consider
and
Suppose n ≠ m. Since
,
we have
Suppose n = m. Then
So we have
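Although parts of the proof above were lost, the orthogonality relation itself is easy to check numerically. The sketch below (numpy and scipy assumed) integrates Tn(x)Tm(x)/sqrt(1 − x^2) over [−1, 1] via the substitution x = cos(θ).

import numpy as np
from scipy.integrate import quad
from numpy.polynomial.chebyshev import Chebyshev

def inner(n, m):
    # <Tn, Tm> = integral over [-1, 1] of Tn(x) Tm(x) / sqrt(1 - x^2) dx,
    # computed with x = cos(theta) to avoid the endpoint singularity.
    Tn, Tm = Chebyshev.basis(n), Chebyshev.basis(m)
    return quad(lambda t: Tn(np.cos(t)) * Tm(np.cos(t)), 0, np.pi)[0]

print(inner(2, 3))   # ~0      for n != m
print(inner(3, 3))   # ~pi/2   for n = m != 0
print(inner(0, 0))   # ~pi     for n = m = 0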
, for each k = 1, 2, ..., n:
Proof. Let .
Then
, for each k = 0, 1, ..., n:
Proof.
Let
We have
Since Tn(x) is of degree n, its derivative is of degree n − 1, and all of its zeros occur at these n − 1 distinct points.
and
for each n ≥ 2.
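A small numerical check of the zeros and extreme values (numpy assumed, with n = 5 chosen arbitrarily):

import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Zeros x_k = cos((2k - 1)pi / (2n)), k = 1..n, and extrema x'_k = cos(k pi / n), k = 0..n.
n = 5
Tn = Chebyshev.basis(n)
zeros = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
extrema = np.cos(np.arange(n + 1) * np.pi / n)

print(np.max(np.abs(Tn(zeros))))   # ~0: Tn vanishes at the claimed zeros
print(Tn(extrema))                 # alternating +1 and -1 at the extrema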
Proof. We derive the monic Chebyshev polynomials by dividing the Chebyshev polynomials Tn(x) by the leading coefficient 2^(n−1).
, for each k = 1, 2, ..., n,
and the extrema occur at
, with ,
for each k = 0, 1, 2, ..., n.
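A short check (numpy assumed) that the monic Chebyshev polynomial Tn(x)/2^(n−1) indeed has leading coefficient 1 and uniform norm 2^(1−n) on [−1, 1], here for n = 4:

import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

n = 4
monic = Chebyshev.basis(n) / 2 ** (n - 1)        # Tn(x) / 2^(n-1)

coeffs = monic.convert(kind=np.polynomial.Polynomial).coef
xs = np.linspace(-1, 1, 10001)
print(coeffs[-1])                  # 1.0: the polynomial is monic
print(np.abs(monic(xs)).max())     # 0.125 = 2^(1 - n)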
Figure 5.2: The first five monic Chebyshev polynomials.
, for all
We want to show that this does not hold. Let Q be the difference of the monic Chebyshev polynomial of degree n and Pn. Since both are monic polynomials of degree n, Q is a polynomial of degree at most n − 1.
we get
Tm(x) · Tn(x) = cos(m arccos(x)) cos(n arccos(x))
P 9: Tm(Tn(x)) = Tmn(x).
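A quick numerical check of property P 9 (numpy assumed, with m = 3, n = 4 chosen arbitrarily):

import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Verify Tm(Tn(x)) = Tmn(x) on a grid.
m, n = 3, 4
xs = np.linspace(-1, 1, 1001)
lhs = Chebyshev.basis(m)(Chebyshev.basis(n)(xs))
rhs = Chebyshev.basis(m * n)(xs)
print(np.max(np.abs(lhs - rhs)))   # ~1e-15: the two sides agree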
P 10:
which is the desired result. These polynomials are thus equal for
Thus,
P 12: .
Proof. For x ∈ [−1, 1], let x = cos(θ). Using the binomial expansion we get
P 15: .
Proof.
Chapter - 5
How to find the best approximating polynomial in the uniform norm
Step 5: In the previous steps we found that M^2 − E^2 and (1 − x^2)
E(x) = 2^(−n+1) Tn(x).
We can write
Combining the results and substituting back we get
Chapter - 6
Conclusion
In this thesis, we studied Chebyshev approximation. We found that Chebyshev was the first to approximate functions in the uniform norm. The problem he wanted to solve was to represent a continuous function f(x) on the closed interval [a, b] by an algebraic polynomial of degree at most n, in such a way that the maximum error is minimized.