Article
A Picard-Type Iterative Scheme for Fredholm Integral
Equations of the Second Kind
José M. Gutiérrez *,† and Miguel Á. Hernández-Verón †

Department of Mathematics and Computer Sciences, University of La Rioja, 26006 Logroño, Spain;
mahernan@unirioja.es
* Correspondence: jmguti@unirioja.es
† These authors contributed equally to this work.

Abstract: In this work, we present an application of Newton’s method for solving nonlinear equations
in Banach spaces to a particular problem: the approximation of the inverse operators that appear
in the solution of Fredholm integral equations. Therefore, we construct an iterative method with
quadratic convergence that does not use either derivatives or inverse operators. Consequently, this
new procedure is especially useful for solving non-homogeneous Fredholm integral equations of
the second kind. We combine this method with a technique to find the solution of Fredholm integral
equations with separable kernels to obtain a procedure that allows us to approach the solution when
the kernel is non-separable.

Keywords: Fredholm integral equation; Newton’s method; iterative processes

Citation: Gutiérrez, J.M.; Hernández-Verón, M.Á. A Picard-Type Iterative Scheme for Fredholm Integral Equations of the Second Kind. Mathematics 2021, 9, 83. https://dx.doi.org/10.3390/math9010083

Received: 10 December 2020; Accepted: 28 December 2020; Published: 1 January 2021

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

In general, an equation in which the unknown function appears under an integral sign is called an integral equation [1–3]. Both linear and nonlinear integral equations appear in numerous fields of science and engineering [4–6], because many physical processes and mathematical models can be described by them, so these equations provide an important tool for modeling processes [7]. In particular, integral equations arise in fluid mechanics, biological models, solid state physics, chemical kinetics, etc. In addition, many initial and boundary value problems can easily be turned into integral equations.

The definition of an integral equation given previously is very general, so in this work, we focus on some particular integral equations that are widely applied, such as Fredholm integral equations [8]. This kind of integral equation appears frequently in mathematical physics, engineering, and mathematics [9].

Fredholm integral equations of the first kind have the form:

    f(x) = ∫_a^b N(x, t) y(t) dt,  x ∈ [a, b];

and those of the second kind can be written as:

    y(x) = f(x) + λ ∫_a^b N(x, t) y(t) dt,  x ∈ [a, b], λ ∈ ℝ.    (1)

In both cases, −∞ < a < b < +∞, λ ∈ ℝ, f(x) ∈ C[a, b] is a given function and N(x, t), defined on [a, b] × [a, b], is called the kernel of the integral equation; y(x) ∈ C[a, b] is the unknown function to be determined. The integral equation is said to be homogeneous when f(x) is the zero function and non-homogeneous otherwise.

For the integral Equation (1), we can consider the operator N : C[a, b] → C[a, b], given by:

    [N(y)](x) = ∫_a^b N(x, t) y(t) dt,  x ∈ [a, b],


and the operator F : C[ a, b] → C[ a, b] given by:


    [F(y)](x) = f(x) + λ ∫_a^b N(x, t) y(t) dt = f(x) + λ[N(y)](x),  x ∈ [a, b], λ ∈ ℝ.    (2)

If the operator F defined in (2) is a contraction, the Banach fixed point theorem [10] guarantees the existence of a unique fixed point of F in C[a, b]. In addition, this fixed point can be approximated by the iterative scheme:

    y0 given in C[a, b],  yn+1 = F(yn),  n ≥ 0.    (3)

The operator F defined in (2) is a contraction in C[a, b] if there exists α ∈ (0, 1) such that:

    ‖F(u) − F(v)‖ < α‖u − v‖,  for all u, v ∈ C[a, b].

If the operator F is differentiable in C[a, b], the condition ‖F′(y)‖ < 1, for all y ∈ C[a, b], implies that F is a contraction. Note that:

    [F′(y)u](x) = λ ∫_a^b N(x, t) u(t) dt,

so the operator F has a unique fixed point y* in C[a, b] if |λ|‖N‖ < 1, with:

    ‖N‖ = max_{x ∈ [a,b]} ∫_a^b |N(x, t)| dt.
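As a quick numerical illustration (our own aside, not part of the original text), the contraction condition |λ|‖N‖ < 1 can be checked by estimating ‖N‖ with a quadrature rule. The kernel and λ below are those of Example 1 later in the paper; the quadrature parameters are our own choices:

```python
import numpy as np

def kernel_norm(N, a, b, nx=201, nt=2001):
    """Estimate ||N|| = max_{x in [a,b]} int_a^b |N(x,t)| dt by sampling
    x on a grid and applying the trapezoidal rule in t."""
    ts = np.linspace(a, b, nt)
    w = np.full(nt, (b - a) / (nt - 1))
    w[0] /= 2; w[-1] /= 2                       # trapezoid weights
    best = 0.0
    for x in np.linspace(a, b, nx):
        best = max(best, np.sum(w * np.abs(N(x, ts))))
    return best

# Kernel and lambda of Example 1: N(x,t) = sin(pi(x+t)), lambda = -pi/8.
norm_N = kernel_norm(lambda x, t: np.sin(np.pi * (x + t)), 0.0, 1.0)
print(abs(-np.pi / 8) * norm_N < 1)   # contraction condition holds
```

Here ‖N‖ = 2/π for this kernel, so |λ|‖N‖ = 1/4 and the fixed point is unique.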

We can find in the literature many techniques to solve, in an exact or approximate way, Fredholm integral equations. For instance, the direct computation method and the successive approximations method are amongst the most used methods for this purpose. Another well-known technique consists of transforming the Fredholm equation into an equivalent boundary value problem. We can add to this list other recently developed techniques, such as the Adomian decomposition method [11], the modified decomposition method [12], or the variational iteration method [13]. The emphasis in this paper is on the use of iterative processes rather than on proving theoretical results of convergence and existence, although such theorems are important. Our concern is the determination of the solution y(x) of Fredholm integral equations of the first and second kind.
In this paper, we present a Newton-type iterative method that shares many properties of Picard-type iterative methods, namely it is derivative-free and does not use inverse operators, while preserving the quadratic order of convergence that characterizes Newton's method. These features allow us to design an efficient iterative method. Actually, with a very reduced number of iterations, we can find competitive approximations to the solution of the involved Fredholm integral equation. This is one of the main targets of our research: to justify that it is enough to consider a few steps of our iterative procedure to reach a good approximation to the solution.
The new iterative procedure is based on the following idea (see [14]): Newton’s
method gives a sequence that does not use inverses when it is applied to the problem of
approximating the inverse of a given operator.
The iterative scheme (3) is known as the method of successive approximations for
operator F . It converges to the fixed point y∗ for any function y0 ∈ C[ a, b].
It is known that a fixed point approximation can be expressed by different iteration
functions y = H(y) with H : C[ a, b] → C[ a, b]. In our case, instead of (3), we consider the
well-known iterative scheme of Picard [15], which can be written in the form:

y0 given in C[ a, b], yn+1 = P (yn ) = yn − G(yn ), n ≥ 0,



where G(y) = ( I − F )(y). Obviously, a fixed point of P is a fixed point of F and vice
versa. Moreover, both methods provide the same iterations, since P (y) = y − G(y) =
y − ( I − F )(y) = F (y).
It is clear that the Banach fixed point theorem does not allow us to locate a fixed point
in a domain that is not all the space C[ a, b]. Both the successive approximation method
and Picard’s method do not need either inverse operators or derivative operators. As a
consequence, they only reach a linear order of convergence. Our aim in this paper is to
construct iterative processes with a quadratic order of convergence, but without using
inverse operators or derivative operators. In addition, we obtain results that allow us to
locate the fixed point in a subset of C[ a, b].
Notice that Equation (1) can be expressed as:

(I − λN )y( x ) = f ( x ). (4)

Therefore, the solution y∗ ( x ) of Equation (1) is given by:

y∗ ( x ) = (I − λN )−1 f ( x ). (5)

From a theoretical point of view, Formula (5) gives the exact solution of Equation (1) or (4). However, for practical purposes, the computation of the inverse (I − λN)^{−1} may be impossible or very complicated. For this reason, we propose the use of iterative methods to approximate this inverse and therefore the solution of the integral Equation (1).
In Section 2, we use Newton’s method for this purpose to obtain a method with
quadratic convergence for calculating inverse operators. In addition, we describe two
procedures for approaching the solution of an integral Equation (1), one for separable
kernels and another one for non-separable kernels. The approximated solutions obtained
by the methods in Section 2 can be used as initial values for the iterative method explained
in Section 3 for finding solutions of integral equations. Actually, by combining the two
techniques given in these two sections, we can obtain, with a low number of steps, good
approximations of the integral Equation (1), especially in the non-separable case. We
illustrate the theoretical results in Section 3, with some numerical examples.

2. Newton’s Method for the Calculus of Inverse Operators


As we said, we can calculate the solution of (1) by Formula (5). Therefore, we consider
the problem of the approximation of the inverse of the linear operator A = I − λN by
means of iterative methods for solving nonlinear equations.
To do this, we introduce the set:

    GL(C[a, b], C[a, b]) = { H ∈ L(C[a, b], C[a, b]) : H^{−1} exists },

where L(C[a, b], C[a, b]) is the set of bounded linear operators from the Banach space C[a, b] into the Banach space C[a, b].
Actually, in this section, we use Newton's method to approach the inverse of a given linear operator A ∈ GL(C[a, b], C[a, b]), namely for solving:

    T(H) = 0,  where  T(H) = H^{−1} − A.

Therefore, the Newton iteration in this case can be written in the following way:

    H0 ∈ L(C[a, b], C[a, b]) given,
    Hm+1 = Hm − [T′(Hm)]^{−1} T(Hm),  m ≥ 0,

or equivalently (to avoid inverse operators, as recommended in [16]):

    T′(Hm)(Hm+1 − Hm) = −T(Hm),  m ≥ 0.

Indeed, to obtain the corresponding algorithm, we only need to compute T′(Hm). Given H ∈ GL(C[a, b], C[a, b]), as H^{−1} exists, if:

    0 < ε < 1 / (‖L‖‖H^{−1}‖),

we have ‖I − H^{−1}(H + εL)‖ < 1 for L ∈ GL(C[a, b], C[a, b]). Therefore, it is known that H + εL ∈ GL(C[a, b], C[a, b]), and then:

    T′(H)L = lim_{ε→0} (1/ε) [T(H + εL) − T(H)] = −H^{−1} L H^{−1}

(see [17]). Note that T′(H) is intended as a Gateaux derivative, but since T is locally Lipschitz in GL(C[a, b], C[a, b]) and T′(H) : GL(C[a, b], C[a, b]) → GL(C[a, b], C[a, b]) is linear, T is Fréchet differentiable.
As a consequence, Newton's method is now given by the following algorithm:

    H0 ∈ L(C[a, b], C[a, b]) given,
    Hm+1 = 2Hm − Hm A Hm,  m ≥ 0.    (6)

Observe that in this case, Newton's method does not use inverse operators for approximating the inverse operator A^{−1} = (I − λN)^{−1}.
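In finite dimension, iteration (6) is the classical Newton–Schulz iteration for the matrix inverse. A minimal numerical sketch follows; the matrix is our own illustrative choice, and the seed H0 = Aᵀ/(‖A‖₁‖A‖∞) is a standard way to guarantee ‖I − H0 A‖ < 1:

```python
import numpy as np

# Iteration (6): H_{m+1} = 2 H_m - H_m A H_m, with no inverses along the way.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
H = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe seed
errors = []
for m in range(8):
    H = 2 * H - H @ A @ H
    errors.append(np.linalg.norm(H - np.linalg.inv(A)))  # inv() only for checking
# The error is roughly squared at every step: quadratic convergence.
print(errors)
```

After a few steps the iterate agrees with A^{−1} to machine precision, while the iteration itself only uses matrix products.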
Now, we prove a local convergence result for the sequence (6). To do this, we suppose that A^{−1} exists or, equivalently, that |λ|‖N‖ < 1. Therefore, we obtain the following result.
In the rest of this paper, we use the following notation for open and closed balls
centered at a point x0 that belongs to a given Banach space X and with radius R:

    B(x0, R) = { x ∈ X : ‖x − x0‖ < R },  B̄(x0, R) = { x ∈ X : ‖x − x0‖ ≤ R }.


 
Theorem 1. Let H0 ∈ B(A^{−1}, θ/‖A‖) with θ ∈ (0, 1). Then, the sequence {Hm} defined by (6) belongs to B(A^{−1}, θ/‖A‖) and converges quadratically to A^{−1}. In addition,

    ‖Hm − A^{−1}‖ ≤ θ^{2^m − 1} ‖H0 − A^{−1}‖.

Proof. Taking into account the definition of the sequence (6), we have:

    Hm+1 A − I = 2Hm A − Hm A Hm A − I = −(Hm A − I)² = −(Hm − A^{−1}) A (Hm − A^{−1}) A.    (7)

Then, if we apply this equality recursively, for Hm ∈ B(A^{−1}, θ/‖A‖), we have:

    ‖Hm+1 − A^{−1}‖ ≤ ‖(Hm+1 A − I) A^{−1}‖ ≤ ‖Hm − A^{−1}‖² ‖A‖ < θ²/‖A‖ < θ/‖A‖,

so Hm+1 ∈ B(A^{−1}, θ/‖A‖) for all m ≥ 0.
We can apply (7) to obtain:

    Hm+1 − A^{−1} = (Hm+1 A − I) A^{−1} = −(Hm A − I)² A^{−1} = −(Hm − A^{−1}) A (Hm − A^{−1}),

and, therefore:

    ‖Hm+1 − A^{−1}‖ ≤ ‖A‖ ‖Hm − A^{−1}‖².

Consequently,

    ‖Hm − A^{−1}‖ ≤ ‖Hm−1 − A^{−1}‖² ‖A‖ ≤ ‖H0 − A^{−1}‖^{2^m} ‖A‖^{2^{m−1} + ··· + 2 + 1},    (8)

and therefore, by the hypothesis:

    ‖Hm − A^{−1}‖ ≤ θ^{2^m − 1} ‖H0 − A^{−1}‖.

Now, we want to prove a semilocal convergence result, without assuming the existence of A^{−1}.

Theorem 2. Let H0 ∈ L(C[a, b], C[a, b]) such that ‖I − H0 A‖ ≤ δ, with δ ∈ (0, (√5 − 1)/2). Then, the sequence {Hm} defined by (6) belongs to B(H0, ‖H0‖/(1 − δ(1 + δ))) and converges quadratically to H*, with H* A = I. Moreover,

    ‖I − Hm A‖ ≤ δ^{2^m − 1} ‖I − H0 A‖.

Proof. A direct application of (7) gives us:

    I − Hm A = (I − Hm−1 A)²,

and therefore:

    ‖I − Hm A‖ ≤ ‖I − Hm−1 A‖² ≤ ··· ≤ ‖I − H0 A‖^{2^m} ≤ δ^{2^m}.    (9)

On the other hand,

    ‖Hm‖ ≤ ‖2Hm−1 − Hm−1 A Hm−1‖ ≤ (1 + ‖I − Hm−1 A‖) ‖Hm−1‖.

Now, by applying the previous inequality recursively and taking into account (9), we obtain:

    ‖Hm‖ ≤ ∏_{j=0}^{m−1} (1 + δ^{2^j}) ‖H0‖ < (1 + δ)^m ‖H0‖.    (10)

Consequently, by the definition of the sequence (6), (9), and (10), we have for k ∈ ℕ:

    ‖Hm+k − Hm‖ ≤ ‖Hm+k − Hm+k−1‖ + ··· + ‖Hm+1 − Hm‖
                ≤ ∑_{j=0}^{k−1} ‖I − Hm+j A‖ ‖Hm+j‖
                ≤ ( ∑_{j=0}^{k−1} δ^{2^{m+j}} (1 + δ)^{m+j} ) ‖H0‖
                < ( ∑_{j=0}^{k−1} [δ(1 + δ)]^{m+j} ) ‖H0‖.

Therefore, as δ(1 + δ) < 1, for m = 0, we have:

    ‖Hk − H0‖ < ‖H0‖ / (1 − δ(1 + δ)).
1 − δ (1 + δ )

 
Then, Hk ∈ B(H0, ‖H0‖/(1 − δ(1 + δ))) for k ≥ 1. Moreover, we obtain:

    ‖Hm+k − Hm‖ < ( ([δ(1 + δ)]^m − [δ(1 + δ)]^{m+k}) / (1 − δ(1 + δ)) ) ‖H0‖,    (11)

and it follows that {Hm} is a Cauchy sequence. Then, {Hm} converges to H*. Moreover, as:

    ‖I − Hm A‖ ≤ ‖I − H0 A‖^{2^m} ≤ δ^{2^m},

it follows that lim_{m→∞} (I − Hm A) = 0 and then H* A = I.

Notice that, if we prove that A^{−1} exists, then H* = A^{−1}. On the other hand, if we do not suppose that A^{−1} exists but we consider H0 such that H0 A = A H0, we have:

    A H1 = A(2H0 − H0 A H0) = 2A H0 − A H0 A H0 = 2H0 A − H0 A H0 A = H1 A.

Therefore, by an inductive procedure, we obtain that Hm A = A Hm and then A H* = I. Therefore, in this case, H* is the inverse operator of A. However, if H0 A ≠ A H0, then H* = lim_{m→∞} Hm satisfies only H* A = I, so the sequence {Hm} converges to the left inverse of A.
Our aim in the rest of the section is to address the problem of computing the inverse of the linear operator A = I − λN that appears in the solution of the Fredholm integral Equation (1). We distinguish two situations, depending on whether the kernel N is separable or not. In the first case, an exact solution can be obtained by means of algebraic procedures, whereas in the second case, we use the sequence defined by Newton's method (6) for approximating A^{−1} = (I − λN)^{−1}.

2.1. Separable Kernels


In the first case, we assume that N(x, t) is a separable kernel, that is:

    N(x, t) = ∑_{i=1}^m αi(x) βi(t).

If we denote Aj = ∫_a^b βj(t) y*(t) dt, we have by (5):

    (I − λN) y*(x) = y*(x) − λ ∑_{j=1}^m αj(x) Aj = f(x),

and:

    (I − λN)^{−1} f(x) = y*(x) = f(x) + λ ∑_{j=1}^m αj(x) Aj.    (12)

In addition, the integrals Aj can be calculated independently of y*. To do this, we multiply the second equality of (12) by βi(x), and we integrate in the x variable. Therefore, we have:

    Ai − λ ∑_{j=1}^m ( ∫_a^b βi(x) αj(x) dx ) Aj = ∫_a^b βi(x) f(x) dx.

Now, if we denote:

    aij = ∫_a^b βi(x) αj(x) dx  and  bi = ∫_a^b βi(x) f(x) dx,

we obtain the following linear system of equations:

    Ai − λ ∑_{j=1}^m aij Aj = bi,  i = 1, . . . , m.    (13)

This system has a unique solution if:

    (−λ)^m ·
    | a11 − 1/λ    a12          a13    ...    a1m        |
    | a21          a22 − 1/λ    a23    ...    a2m        |
    | ...          ...          ...    ...    ...        |  ≠ 0;
    | am1          am2          am3    ...    amm − 1/λ  |

that is, we assume that 1/λ is not an eigenvalue of the matrix (aij). Thus, if A1, A2, . . . , Am is the solution of system (13), we can obtain the solution directly:

    y*(x) = (I − λN)^{−1} f(x) = f(x) + λ ∑_{j=1}^m αj(x) Aj.    (14)

From a practical point of view, we can solve the systems defined in (13) by using classical techniques, such as LU decomposition or iterative methods for solving linear systems. We can also use any kind of specific scientific software for this purpose. Notice that System (13) depends directly on the integration needed for computing the coefficients aij and bi. If analytical integration is impossible, a numerical integration formula must be used.
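The algebraic procedure above (compute the coefficients aij and bi, solve system (13), and assemble (14)) can be sketched as follows. The trapezoidal quadrature and the toy equation in the usage lines are our own choices, not part of the original text:

```python
import numpy as np

def solve_separable(alphas, betas, f, lam, a, b, n=2001):
    """Solution (14) of y = f + lam * int_a^b N(x,t) y(t) dt for the
    separable kernel N(x,t) = sum_j alphas[j](x) * betas[j](t).
    The integrals a_ij and b_i of system (13) use the trapezoidal rule."""
    t = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] /= 2; w[-1] /= 2                                   # trapezoid weights
    m = len(alphas)
    aij = np.array([[np.sum(w * betas[i](t) * alphas[j](t)) for j in range(m)]
                    for i in range(m)])
    bi = np.array([np.sum(w * betas[i](t) * f(t)) for i in range(m)])
    A = np.linalg.solve(np.eye(m) - lam * aij, bi)          # system (13)
    return lambda x: f(x) + lam * sum(Aj * al(x) for Aj, al in zip(A, alphas))

# Toy check: N(x,t) = x*t, f(x) = x, lam = 1/2 has exact solution y(x) = 6x/5.
y = solve_separable([lambda x: x], [lambda t: t], lambda x: x, 0.5, 0.0, 1.0)
print(y(0.5))   # close to 0.6
```

The accuracy of the recovered solution is limited only by the quadrature used for aij and bi.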

2.2. Non-Separable Kernels


Now, we wonder what happens when the kernel is not separable. With the aim of reaching quadratic convergence, we can apply Newton's method to approximate the inverse of the operator A = I − λN defined in (5) to obtain the solution y*(x) of the Fredholm integral Equation (1).
Our idea is to approximate N(x, t) by a separable kernel Ñ(x, t), that is:

    N(x, t) = Ñ(x, t) + R(x, t),    (15)

where Ñ(x, t) = ∑_{i=1}^m αi(x) βi(t) and R(x, t) is the error in the approximation.
We consider the operator:

    [Ñ(y)](x) = ∫_a^b Ñ(x, t) y(t) dt,  x ∈ [a, b].    (16)

With this operator, we can consider:

    H0 = (I − λÑ)^{−1}    (17)

as the starting seed in the iterative process (6) to approximate A^{−1}.


To check whether H0 defined in (17) is a good choice, we can apply Theorem 2. We need to guarantee that:

    ‖I − H0 A‖ ≤ δ < (√5 − 1)/2.

In this case, we have:

    ‖I − H0 A‖ ≤ |λ| ‖H0‖ ‖N − Ñ‖ ≤ |λ| ‖H0‖ ‖R‖ (b − a),

where:

    ‖R‖ = max_{x ∈ [a,b]} ∫_a^b |R(x, t)| dt

and ‖H0‖ is the norm induced by the max-norm in the space L(C[a, b], C[a, b]).
Consequently, if the error R in (15) is small enough, H0 can be considered a good starting point for the iterative process (6). If, for example, N(x, t) is sufficiently differentiable in one of its arguments, we can use a Taylor expansion to calculate the approximation given in (15); the Taylor remainder then allows us to establish how close ‖R‖ is to zero. In general, this approximation improves with the number of terms in the Taylor expansion.
Now, we compute m steps of Newton's method (6) for approximating A^{−1} = (I − λN)^{−1}. Once Hm is calculated, we consider Hm f(x) as an approximate solution of the Fredholm integral Equation (1). This is the main idea developed in the next section.

3. A Picard-Type Iterative Scheme from the Newton Method


As we said in the previous section, our target now is to obtain the solution y* of Equation (1). Therefore, as the limit H* of the sequence of linear operators defined in (6) satisfies H* A = I, with A = I − λN, we have:

    (I − λN) y*(x) = f(x),
    H* (I − λN) y*(x) = H* f(x),
    y*(x) = H* f(x).

Therefore, for approximating the solution y* of Equation (1), we can consider the following iterative scheme, for A = I − λN:

    H0 ∈ L(C[a, b], C[a, b]) given,
    y0(x) = H0 f(x),
    Hm = 2Hm−1 − Hm−1 A Hm−1,  m ≥ 1,    (18)
    ym(x) = Hm f(x).

From the previous theorems, it is easy to prove the local and semilocal convergence of
the iterative scheme (18).

Theorem 3. Suppose that A^{−1} exists. Given H0 ∈ L(C[a, b], C[a, b]) such that H0 ∈ B(A^{−1}, θ/‖A‖), with θ ∈ (0, 1), then, for each y0 ∈ B(y*, θ‖f‖/‖A‖), the sequence {ym} defined by (18) belongs to B(y*, θ‖f‖/‖A‖) and converges quadratically to y*, the solution of Equation (1). In addition,

    ‖ym − y*‖ ≤ θ^{2^m − 1} ‖H0 − A^{−1}‖ ‖f‖.

Proof. From Equation (7) for m = 0, we obtain:

    ‖H1 − A^{−1}‖ = ‖(H1 A − I) A^{−1}‖ ≤ ‖H0 − A^{−1}‖² ‖A‖ < θ²/‖A‖ < θ/‖A‖,

and:

    ‖y1 − y*‖ = ‖(H1 − A^{−1}) f‖ ≤ θ‖f‖/‖A‖.

Now, by mathematical induction, it is easy to prove that ym ∈ B(y*, θ‖f‖/‖A‖) for m ∈ ℕ.
On the other hand, taking into account (8), we have:

    ‖ym − y*‖ ≤ ‖Hm − A^{−1}‖ ‖f‖ ≤ θ^{2^m − 1} ‖H0 − A^{−1}‖ ‖f‖.

Then, the result follows directly.
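A Nyström-type discretization (our own illustrative device; the paper works directly in C[a, b]) turns scheme (18) into matrix operations. Below is a sketch on the separable-kernel equation of Example 1 later in the paper, with trapezoidal nodes and the simple seed H0 = I, both our choices:

```python
import numpy as np

# Discretized sketch of scheme (18): y_m = H_m f, with
# H_m = 2 H_{m-1} - H_{m-1} A H_{m-1} and A = I - lam*N on quadrature nodes.
n = 201
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1)); w[0] /= 2; w[-1] /= 2        # trapezoid weights
lam = -np.pi / 8
A = np.eye(n) - lam * np.sin(np.pi * (t[:, None] + t[None, :])) * w
f = 1 - 0.75 * np.cos(np.pi * t) - (np.pi / 16) * np.sin(np.pi * t)

H = np.eye(n)              # simple seed; here ||I - H0 A|| = |lam|*||N|| < 1
y = H @ f                  # y_0
for m in range(6):
    H = 2 * H - H @ A @ H
    y = H @ f              # y_m = H_m f
err = np.max(np.abs(y - (1 - np.cos(np.pi * t))))
print(err)                 # limited only by the quadrature error
```

After a few steps, the error against the exact solution 1 − cos(πx) is dominated by the O(h²) quadrature error, illustrating that very few iterations are needed.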



Now, to prove a semilocal convergence result for the iterative scheme (18), we will not assume the existence of A^{−1}.

Theorem 4. Let H0 ∈ L(C[a, b], C[a, b]) such that ‖I − H0 A‖ ≤ δ, with δ ∈ (0, (√5 − 1)/2). Then, the sequence {ym} defined by (18) belongs to B(y0, 2‖H0‖‖f‖/(1 − δ(1 + δ))) and converges quadratically to y*, the solution of Equation (1).

Proof. If we consider the sequence {ym} given by (18), from (11), we have:

    ‖ym+k − ym‖ ≤ (‖Hm+k − Hm+k−1‖ + ··· + ‖Hm+1 − Hm‖) ‖f‖
               < 2 ( ∑_{j=0}^{k−1} [δ(1 + δ)]^{m+j} ) ‖H0‖ ‖f‖;

then, as in (11), we obtain:

    ‖ym+k − ym‖ < 2 ( ([δ(1 + δ)]^m − [δ(1 + δ)]^{m+k}) / (1 − δ(1 + δ)) ) ‖H0‖ ‖f‖,    (19)

and we obtain that {ym} is a Cauchy sequence. Then, {ym} converges to some ỹ. Moreover, taking m = 0 in (19), it follows that {ym} ⊆ B(y0, 2‖H0‖‖f‖/(1 − δ(1 + δ))).
Notice that ỹ is the solution of Equation (1) if we verify that A ỹ(x) = f(x). From Theorem 2, the sequence {Hm} converges to H* with H* A = I and ỹ(x) = H* f(x), so y*(x) = H* A y*(x) = H* f(x) = ỹ(x). Therefore, it follows that ỹ(x) = y*(x) is the solution of Equation (1).

We would like to point out that this result allows us to locate the solution of Equation (1) in the closed ball B̄(y0, 2‖H0‖‖f‖/(1 − δ(1 + δ))).

4. Examples
We illustrate the theoretical results obtained in the previous sections with some
examples. Firstly, we examine a case with a separable kernel. In this case, the technique
developed in Section 2.1 can be applied.

Example 1. We consider the following linear Fredholm integral equation,

    y(x) = 1 − (3/4) cos(πx) − (π/16) sin(πx) − (π/8) ∫_0^1 sin(π(x + t)) y(t) dt.    (20)

It is easy to check that y*(x) = 1 − cos(πx) is the solution.

We can apply the procedure developed in Section 2.1 with:

    f(x) = 1 − (3/4) cos(πx) − (π/16) sin(πx),  λ = −π/8,

and the separable kernel:

    N(x, t) = sin(π(x + t)) = sin(πx) cos(πt) + cos(πx) sin(πt),

that is, α1(x) = sin(πx), α2(x) = cos(πx), β1(t) = cos(πt), β2(t) = sin(πt).



We have b1 = −3/8, b2 = 2/π − π/32, and:

    (aij) = ( 0    1/2 )
            ( 1/2  0   ).

Therefore, the solution of the linear system (13) is A1 = −1/2, A2 = 2/π, and by (14), we obtain the solution of the integral Equation (20):

    y*(x) = f(x) + λ(A1 α1(x) + A2 α2(x)) = 1 − cos(πx).
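The 2×2 instance of system (13) in this example can be verified directly (a check we add for illustration):

```python
import numpy as np

# System (13) for Example 1: (I - lam * (a_ij)) (A1, A2)^T = (b1, b2)^T.
lam = -np.pi / 8
aij = np.array([[0.0, 0.5],
                [0.5, 0.0]])
b = np.array([-3.0 / 8.0, 2.0 / np.pi - np.pi / 32.0])
A1, A2 = np.linalg.solve(np.eye(2) - lam * aij, b)
print(A1, A2)   # A1 = -1/2, A2 = 2/pi, as stated in the text
```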

In the following example, we establish a procedure for approximating the solution of an integral equation with a non-separable kernel. To do so, we approximate the given integral equation by another integral equation with a separable kernel. Next, we find the exact solution of this last integral equation with the technique developed in Section 2.1. This exact solution provides us with a good approximation of the solution of the original integral equation with a non-separable kernel.

Example 2. We consider the following Fredholm integral equation,

    y(x) = (4πx − sin(πx))/2 + (1/2) ∫_0^1 x cos(πxt²) y(t) dt,    (21)

whose exact solution is y*(x) = 2πx.

The non-separable kernel N(x, t) = x cos(πxt²) can be approached by the separable kernel Ñ(x, t) defined by:

    Ñ(x, t) = x − (1/2) π² t⁴ x³ + (1/4!) π⁴ t⁸ x⁵ − (1/6!) π⁶ t¹² x⁷.    (22)

Consequently,

    N(x, t) = Ñ(x, t) + R(θ, x, t),  with  R(θ, x, t) = (sin(πxθ²)/7!) x⁸ t¹³,  θ ∈ (0, t).

Then, we consider the linear Fredholm integral equation:

    y(x) = (4πx − sin(πx))/2 + (1/2) ∫_0^1 ( x − (1/2) π² t⁴ x³ + (1/4!) π⁴ t⁸ x⁵ − (1/6!) π⁶ t¹² x⁷ ) y(t) dt.    (23)

Obviously, the difference between the solutions of Equations (21) and (23) depends on the remainder R(θ, x, t). The more terms we take in the expansion of N(x, t), the smaller the remainder, and therefore the closer the solutions of (21) and (23). Note, moreover, that Equation (23) has a separable kernel, so we can obtain its exact solution by following the procedure shown in Section 2.1.
We consider the real functions:

    α1(x) = x,  α2(x) = x³,  α3(x) = x⁵,  α4(x) = x⁷,

    β1(t) = 1,  β2(t) = −(1/2) π² t⁴,  β3(t) = (1/4!) π⁴ t⁸,  β4(t) = −(1/6!) π⁶ t¹².
Now, we can obtain y0(x) = H0 f(x), with H0 = (I − λÑ)^{−1}, as an approximated solution to our problem. We follow the steps indicated in Section 2.1 to get y0(x) = H0 f(x). In this problem, we have b1 = 2.8233, b2 = −4.9502, b3 = 2.4843, b4 = −0.5882, and:

    (aij) = (  1/2          1/4          1/6          1/8        )
            ( −π²/12       −π²/16       −π²/20       −π²/24      )
            (  π⁴/240       π⁴/288       π⁴/336       π⁴/384     )
            ( −π⁶/10080    −π⁶/11520    −π⁶/12960    −π⁶/14400   ).

The solution of the linear system (13) is A1 = 3.1379, A2 = −5.1551, A3 = 2.5421, A4 = −0.5971. Then, by (14), the solution of the integral Equation (23) is:

    y0(x) = f(x) + λ(A1 α1(x) + A2 α2(x) + A3 α3(x) + A4 α4(x))
          = −0.298544 x⁷ + 1.27105 x⁵ − 2.57756 x³ + 7.85213 x − 0.5 sin(πx),

which provides us with a good approximation of the solution of (21), as we can see in Figure 1. Of course, if we increase the number of terms in the approximation given in (22), we can improve the approximation y0 of the solution of Equation (21).

Figure 1. On the left, the graphics of the first approximation to the solution of the integral
Equation (21) and the exact solution y∗ ( x ) = 2πx. On the right, the graphics of the corresponding
error.
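As an aside (our own check, not in the original), the size of the truncation error committed by the separable kernel (22) can be measured on a grid:

```python
import math
import numpy as np

# Compare N(x,t) = x cos(pi x t^2) with its degree-6 Taylor truncation (22).
x = np.linspace(0.0, 1.0, 101)[:, None]
t = np.linspace(0.0, 1.0, 101)[None, :]
z = np.pi * x * t**2
N = x * np.cos(z)
N_tilde = x * (1 - z**2 / 2 + z**4 / math.factorial(4) - z**6 / math.factorial(6))
err = np.max(np.abs(N - N_tilde))
print(err)   # worst-case truncation error on [0,1]^2, attained at x = t = 1
```

The worst case is modest (around 0.2, at x = t = 1), which explains why the separable solve already gives a usable y0 and why adding Taylor terms improves it further.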

Now, we consider a problem with a non-separable kernel. In this case, the technique developed in Section 2.2 and the algorithm given in (18) can be used.

Example 3. We consider the following Fredholm integral equation,

    y(x) = (2x² − 1)/3 + (2/3) eˣ (x − 1) + (1/3) ∫_0^1 x³ e^{xt} y(t) dt.    (24)

It is easy to check that y*(x) = x² − 1 is the solution.

Obviously, in this case, f(x) = (2x² − 1)/3 + (2/3) eˣ (x − 1), λ = 1/3, and the kernel N(x, t) = x³ e^{xt} is non-separable. Then, for example, there exists θ ∈ (0, t) such that:

    N(x, t) = x³ e^{xt} = ∑_{i=0}^m (x^{i+3} t^i)/i! + R(θ, x, t),  R(θ, x, t) = (e^{xθ}/(m + 1)!) x^{m+4} t^{m+1}.

Thus, if we consider m = 3, we have:

    Ñ(x, t) = ∑_{i=0}^3 (x^{i+3} t^i)/i! = ∑_{i=1}^4 (x^{i+2} t^{i−1})/(i − 1)!,    (25)

and then:

    N(x, t) = Ñ(x, t) + R(θ, x, t),  with  Ñ(x, t) = ∑_{i=1}^4 αi(x) βi(t)  and  R(θ, x, t) = (e^{xθ}/4!) x⁷ t⁴,

for the real functions:

    α1(x) = x³,  α2(x) = x⁴,  α3(x) = x⁵,  α4(x) = x⁶,

    β1(t) = 1,  β2(t) = t,  β3(t) = t²,  β4(t) = t³.


We consider as the initial function in the sequence (6), H0 = (I − λÑ)^{−1}, where:

    [Ñ(y)](x) = ∫_a^b Ñ(x, t) y(t) dt,  x ∈ [a, b].

In our example,

    ‖I − H0 A‖ ≤ |λ| ‖H0‖ ‖R‖ (b − a) ≤ (1/3)(e/4!) ‖H0‖.

Note that:

    ‖λÑ‖ < (1/3)(1 + 1 + 1/2 + 1/3!) = 8/9 < 1,

so by the Banach lemma on inverse operators, there exists H0 = (I − λÑ)^{−1} and ‖H0‖ ≤ 9. Consequently,

    ‖I − H0 A‖ ≤ 3 · e/4! = e/8 < (√5 − 1)/2.
Now, we can use the algorithm (18) with this H0 to approximate the solution of our problem. Actually, we can obtain the initial approximation y0(x) = H0 f(x) by following the procedure shown in Section 2.1. In this case, we have b1 = (11 − 6e)/9, b2 = 2(e − 3)/3, b3 = (241 − 90e)/45, b4 = (264e − 719)/36, and:

    (aij) = ( 1/4  1/5  1/6  1/7  )
            ( 1/5  1/6  1/7  1/8  )
            ( 1/6  1/7  1/8  1/9  )
            ( 1/7  1/8  1/9  1/10 ).

The solution of the linear system (13) is A1 = −0.6754, A2 = −0.2575, A3 = −0.1399, A4 = −0.0892. Then, by (14), we obtain the approximation:

    y0(x) = f(x) + λ(A1 α1(x) + A2 α2(x) + A3 α3(x) + A4 α4(x))
          = (2/3) (−0.0446 x⁶ − 0.07 x⁵ − 0.1288 x⁴ − 0.3377 x³ + x² + eˣ (x − 1) − 0.5).

Next, to calculate the first approximation y1(x), we use (18) to get:

    y1(x) = 2y0(x) − H0 y0(x) + λ N y0(x).

We can compute H0 y0(x) by the same technique described for separable kernels, just by changing f(x) to y0(x), to obtain:

    H0 y0(x) = (−0.2432 x⁶ − 0.3545 x⁵ − 0.6031 x⁴ − 1.4584 x³ + 2x² + 2eˣ(x − 1) − 1) / 3.

As N is a non-separable kernel, we can approximate N y0(x) by Ñ y0(x) and follow the same procedure as in Section 2.1. In this way, we have:

    Ñ y0(x) = x³ (−0.0584 x³ − 0.0675 x² − 0.0799 x − 0.5011 e^{x/2} − 0.009 eˣ − 0.2648).

Consequently, we obtain the following approximation for y1(x):

    y1(x) = 2y0(x) − H0 y0(x) + λ Ñ y0(x)
          = ( 0.0063 x⁶ + 0.0071 x⁵ + 0.0081 x⁴ − 0.5011 e^{x/2} x³ − 0.1573 x³ + eˣ (−0.009 x³ + 2x − 2) + 2x² − 1 ) / 3.

As we can see in Figure 2, y1(x) considerably improves the initial approximation y0(x). On the left side of Figure 2, we can appreciate that y1(x) practically overlaps the exact solution x² − 1. On the right side, we plot the errors committed by y0(x) and y1(x) in approximating the exact solution, that is, the error functions:

    Ei(x) = |yi(x) − x² + 1|,    (26)

where yi(x), i = 0, 1, are the functions defined above by following our procedure. If we consider more terms in (25), we can obtain a better approximation of the exact solution of (24).

Figure 2. On the left, the graphics of the first two approximations to the solution of the integral
Equation (24) and the exact solution y( x ) = x2 − 1. On the right, the graphics of the corresponding
errors Ei ( x ) defined in (26).

Finally, we compare our procedure with the classical Picard iterative method defined in (3):

    pi+1(x) = f(x) + (1/3) ∫_0^1 x³ e^{xt} pi(t) dt,  p0(x) = f(x),  i ≥ 0.
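For reference, this Picard comparison can be reproduced with a simple quadrature discretization (the grid and quadrature are our own choices):

```python
import numpy as np

# Picard iteration (3) for Example 3, discretized on trapezoid nodes:
# p_{i+1} = f + (1/3) * int_0^1 x^3 e^{xt} p_i(t) dt.
n = 201
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1)); w[0] /= 2; w[-1] /= 2
K = (t[:, None] ** 3) * np.exp(t[:, None] * t[None, :]) * w
f = (2 * t**2 - 1) / 3 + (2 / 3) * np.exp(t) * (t - 1)
exact = t**2 - 1

p = f.copy()                                   # p_0 = f
errs = []
for i in range(4):
    p = f + (1 / 3) * K @ p                    # one Picard step
    errs.append(np.max(np.abs(p - exact)))
print(errs)   # errors shrink by a roughly constant factor: linear convergence
```

The errors decrease geometrically, in contrast with the quadratic decay of the Newton-based scheme (18), which is precisely the behavior reported in Figure 3.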

On the left side of Figure 3, we plot the corresponding errors committed by Picard's method. In this case, the error functions are:

    Pi(x) = |pi(x) − x² + 1|,  i = 1, 2, 3.    (27)

On the right side of this figure, we can appreciate that the error committed by only one iteration of our method is smaller than the error obtained with three Picard iterations and is comparable to the error committed by four iterations of Picard's method.

Figure 3. On the left, errors committed by the first three iterates of Picard’s method (27). On the right,
comparison among E1 ( x ), P3 ( x ), and P4 ( x ).

5. Conclusions
In this work, we consider the numerical solution of Fredholm integral equations of the second kind. We transform this problem into another one where the key is to approximate the inverse of a given operator that allows us to solve the integral equation. In this way, we construct an iterative procedure based on an important characteristic of Newton's method: it does not use the inverse when it is applied to the nonlinear problem of calculating the inverse of an operator (see [14]). With this idea, we obtain a Picard-type iterative method, with quadratic convergence, that does not use either derivatives or inverse operators. This iterative method is more efficient and precise than the classical Picard iteration that is usually used for solving this kind of problem, at least when a discretization procedure is not used.
We think our method is a good way of obtaining starting points, as a first approximation to the solution of Fredholm integral equations. Therefore, we can consider our method as a good predictor: a first attempt to obtain a starting point, to be refined afterwards by another, corrector, iterative method.

Author Contributions: Investigation, J.M.G. and M.Á.H.-V. All authors have read and agreed to the
published version of the manuscript.
Funding: This research was funded by the Spanish Ministerio de Ciencia, Innovación y Universi-
dades, Grant Number PGC2018-095896-B-C21.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Data Availability Statement: Data is contained within the article.


Conflicts of Interest: The authors declare no conflict of interest.

References
1. Ganesh, M.; Joshi, M.C. Numerical solvability of Hammerstein integral equations of mixed type. IMA J. Numer. Anal. 1991, 11,
21–31.
2. Hernández-Verón, M.A.; Martínez, E. On nonlinear Fredholm integral equations with non-differentiable Nemystkii operator.
Math. Methods Appl. Sci. 2020, 43, 7961–7976.
3. Rashidinia, J.; Zarebnia, M. New approach for numerical solution of Hammerstein integral equations. Appl. Math. Comput. 2007, 185, 147–154.
4. Argyros, I.K. On a class of nonlinear integral equations arising in neutron transport. Aequ. Math. 1988, 36, 99–111.
5. Bruns, D.D.; Bailey, J.E. Nonlinear feedback control for operating a nonisothermal CSTR near an unstable steady state. Chem. Eng.
Sci. 1977, 32, 257–264.
6. Chandrasekhar, S. Radiative Transfer; Dover: New York, NY, USA, 1960.
7. Argyros, I.K.; Regmi, S. Undergraduate Research at Cameron University on Iterative Procedures in Banach and Other Spaces; Nova
Science Publisher: New York, NY, USA, 2019.
8. Porter, D.; Stirling, D.S.G. Integral Equations; Cambridge University Press: Cambridge, UK, 1990.
9. Davis, H.T. Introduction to Nonlinear Differential and Integral Equations; Dover: New York, NY, USA, 1962.
10. Berinde, V. Iterative Approximation of Fixed Point; Springer: New York, NY, USA, 2005.
11. Adomian, G. Solving Frontier Problems of Physics, The Decomposition Method; Kluwer: Boston, MA, USA, 1994.
12. Wazwaz, A.M. A reliable modification of the Adomian decomposition method. Appl. Math. Comput. 1999, 102, 77–86.
13. He, J.H. Some asymptotic methods for strongly nonlinear equations. Intern. J. Mod. Phys. B. 2006, 20, 1141–1199.
14. Amat, S.; Ezquerro, J.A.; Hernández-Verón, M.A. Approximation of inverse operators by a new family of high-order iterative
methods. Numer. Linear Algebra Appl. 2014, 21, 629–644.
15. Ezquerro, J.A.; Hernández-Verón, M.A. A modification of the convergence conditions for Picard’s iteration. Comp. Appl. Math.
2004, 23, 55–65.
16. Rheinboldt, W.C. Methods for Solving Systems of Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1974.
17. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
