Mathematics 09 00083
Article
A Picard-Type Iterative Scheme for Fredholm Integral
Equations of the Second Kind
José M. Gutiérrez *,† and Miguel Á. Hernández-Verón †
Department of Mathematics and Computer Sciences, University of La Rioja, 26006 Logroño, Spain;
mahernan@unirioja.es
* Correspondence: jmguti@unirioja.es
† These authors contributed equally to this work.
Abstract: In this work, we present an application of Newton’s method for solving nonlinear equations
in Banach spaces to a particular problem: the approximation of the inverse operators that appear
in the solution of Fredholm integral equations. Therefore, we construct an iterative method with
quadratic convergence that does not use either derivatives or inverse operators. Consequently, this
new procedure is especially useful for solving non-homogeneous Fredholm integral equations of
the second kind. We combine this method with a technique to find the solution of Fredholm integral
equations with separable kernels to obtain a procedure that allows us to approach the solution when
the kernel is non-separable.
Citation: Gutiérrez, J.M.; Hernández-Verón, M.Á. A Picard-Type Iterative Scheme for Fredholm Integral Equations of the Second Kind. Mathematics 2021, 9, 83. https://dx.doi.org/10.3390/math9010083

Received: 10 December 2020; Accepted: 28 December 2020; Published: 1 January 2021

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

In general, an equation in which the unknown function is under a sign of integration is called an integral equation [1–3]. Both linear and nonlinear integral equations appear in numerous fields of science and engineering [4–6], because many physical processes and mathematical models can be described by them, so that these equations provide an important tool for modeling processes [7]. In particular, integral equations arise in fluid mechanics, biological models, solid state physics, chemical kinetics, etc. In addition, many initial and boundary value problems can be easily turned into integral equations.

The definition of an integral equation given previously is very general, so in this work, we focus on some particular integral equations that are widely applied, such as Fredholm integral equations [8]. This kind of integral equation appears frequently in mathematical physics, engineering, and mathematics [9].

Fredholm integral equations of the first kind have the form:

    f(x) = ∫_a^b N(x,t) y(t) dt,  x ∈ [a,b];

and those of the second kind can be written as:

    y(x) = f(x) + λ ∫_a^b N(x,t) y(t) dt,  x ∈ [a,b], λ ∈ ℝ. (1)

In both cases, −∞ < a < b < +∞, λ ∈ ℝ, f(x) ∈ C[a,b] is a given function and N(x,t), defined in [a,b] × [a,b], is called the kernel of the integral equation. y(x) ∈ C[a,b] is the unknown function to be determined. The integral equation is said to be homogeneous when f(x) is the zero function and non-homogeneous otherwise.

For the integral Equation (1), we can consider the operator N : C[a,b] → C[a,b] given by:

    [N(y)](x) = ∫_a^b N(x,t) y(t) dt,  x ∈ [a,b].
If the operator F defined in (2) is a contraction, the Banach fixed point theorem [10] guarantees the existence of a unique fixed point of F in C[a,b]. In addition, this fixed point can be approximated by the iterative scheme of successive approximations:

    y_{n+1} = F(y_n),  n ≥ 0,  y_0 ∈ C[a,b] given, (3)

which coincides with Picard's method y_{n+1} = P(y_n), where P(y) = y − G(y) and G(y) = (I − F)(y). Obviously, a fixed point of P is a fixed point of F and vice versa. Moreover, both methods provide the same iterates, since P(y) = y − G(y) = y − (I − F)(y) = F(y).
It is clear that the Banach fixed point theorem does not allow us to locate a fixed point in a domain smaller than the whole space C[a,b]. Neither the successive approximation method nor Picard's method needs inverse or derivative operators; as a consequence, they only reach a linear order of convergence. Our aim in this paper is to construct iterative processes with a quadratic order of convergence, but without using inverse or derivative operators. In addition, we obtain results that allow us to locate the fixed point in a subset of C[a,b].
Notice that Equation (1) can be expressed as:

    (I − λN) y(x) = f(x). (4)

Hence, if the operator I − λN is invertible, the solution y* of Equation (1) is:

    y*(x) = (I − λN)^{-1} f(x). (5)

From a theoretical point of view, Formula (5) gives the exact solution of Equation (1) or (4). However, for practical purposes, computing the inverse (I − λN)^{-1} may be impossible or very complicated. For this reason, we propose the use of iterative methods to approximate this inverse and, therefore, the solution of the integral Equation (1).
In Section 2, we use Newton's method for this purpose to obtain a method with quadratic convergence for calculating inverse operators. In addition, we describe two procedures for approaching the solution of an integral Equation (1): one for separable kernels and another one for non-separable kernels. The approximated solutions obtained by the methods in Section 2 can be used as initial values for the iterative method explained in Section 3 for finding solutions of integral equations. Actually, by combining the techniques given in these two sections, we can obtain, with a low number of steps, good approximations of the solution of the integral Equation (1), especially in the non-separable case. We illustrate the theoretical results in Section 4 with some numerical examples.
where L(C[a,b], C[a,b]) is the set of bounded linear operators from the Banach space C[a,b] into itself; GL(C[a,b], C[a,b]) denotes the subset of those operators that are invertible.

Actually, in this section, we use Newton's method to approximate the inverse of a given linear operator A ∈ GL(C[a,b], C[a,b]), namely for solving:

    T(H) = 0,  where T(H) = H^{-1} − A.
Therefore, the Newton iteration in this case can be written in the following way:

    H_0 ∈ L(C[a,b], C[a,b]) given,
    H_{m+1} = H_m − [T'(H_m)]^{-1} T(H_m),  m ≥ 0,

or, equivalently, as the solution of the linear problem:

    T'(H_m)(H_{m+1} − H_m) = −T(H_m),  m ≥ 0.
First, we compute the derivative of the operator T. For:

    0 < ε < 1 / (‖L‖ ‖H^{-1}‖),

we have ‖I − H^{-1}(H + εL)‖ < 1 for L ∈ GL(C[a,b], C[a,b]). Therefore, it is known that H + εL ∈ GL(C[a,b], C[a,b]), and then:

    T'(H) L = lim_{ε→0} (1/ε) [T(H + εL) − T(H)] = −H^{-1} L H^{-1}

(see [17]). Note that T'(H) is defined as a Gâteaux derivative but, since T is locally Lipschitz in GL(C[a,b], C[a,b]) and T'(H) : GL(C[a,b], C[a,b]) → GL(C[a,b], C[a,b]) is linear, T is in fact Fréchet differentiable.
As a consequence, Newton's method is now given by the following algorithm:

    H_0 ∈ L(C[a,b], C[a,b]) given,
    H_{m+1} = 2H_m − H_m A H_m,  m ≥ 0. (6)

Observe that, in this case, Newton's method does not use inverse operators for approximating the inverse operator A^{-1} = (I − λN)^{-1}.
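In a finite-dimensional setting, iteration (6) is the classical Newton–Schulz scheme for matrix inversion, and its inverse-free character is easy to observe numerically. The following sketch is illustrative only: the matrix A and the standard starting guess H_0 = A^T/(‖A‖_1 ‖A‖_∞), which guarantees ‖I − H_0 A‖ < 1 for an invertible A, are our own choices, not taken from the paper.

```python
import numpy as np

# Hypothetical example matrix playing the role of the operator A.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Standard starting guess H0 = A^T / (||A||_1 ||A||_inf),
# which ensures the spectral radius of I - H0 A is below 1.
H = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))

# Newton's method (6): H_{m+1} = 2 H_m - H_m A H_m -- no inverses used.
for m in range(10):
    H = 2 * H - H @ A @ H

err = np.linalg.norm(np.eye(3) - H @ A)
print(err < 1e-12)  # True: H has converged to A^{-1}
```

The residual ‖I − H_m A‖ is squared at every step, which is the quadratic convergence proven below.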
Now, we prove a local convergence result for the sequence (6). To do this, we suppose that A^{-1} exists or, equivalently, that |λ|‖N‖ < 1. Therefore, we obtain the following result.

In the rest of this paper, we use the following notation for open and closed balls centered at a point x_0 that belongs to a given Banach space X and with radius R:

    B(x_0, R) = {x ∈ X : ‖x − x_0‖ < R}  and  B̄(x_0, R) = {x ∈ X : ‖x − x_0‖ ≤ R}.
Proof. Taking into account the definition of the sequence (6), we have:

    ‖H_{m+1} − A^{-1}‖ ≤ ‖(H_{m+1} A − I) A^{-1}‖ ≤ ‖H_m − A^{-1}‖² ‖A‖ < θ²/‖A‖ < θ/‖A‖,

so H_{m+1} ∈ B(A^{-1}, θ/‖A‖) for all m ≥ 0.

We can apply (7) to obtain:

    H_{m+1} − A^{-1} = −(H_m − A^{-1}) A (H_m − A^{-1}),

and, therefore:

    ‖H_{m+1} − A^{-1}‖ ≤ ‖A‖ ‖H_m − A^{-1}‖².

Consequently,

    ‖H_m − A^{-1}‖ ≤ ‖H_{m−1} − A^{-1}‖² ‖A‖ ≤ ‖H_0 − A^{-1}‖^{2^m} ‖A‖^{(2^{m−1}+···+2+1)}. (8)

On the other hand, from (6),

    I − H_m A = (I − H_{m−1} A)²,

and therefore:

    ‖I − H_m A‖ ≤ ‖I − H_{m−1} A‖² ≤ ··· ≤ ‖I − H_0 A‖^{2^m} ≤ δ^{2^m}. (9)
Now, by applying the previous inequality recursively and taking into account (9), we obtain:

    ‖H_m‖ ≤ ∏_{j=0}^{m−1} (1 + δ^{2^j}) ‖H_0‖ < (1 + δ)^m ‖H_0‖. (10)

Consequently, by the definition of the sequence (6), (9), and (10), we have for k ∈ ℕ:

    ‖H_k − H_0‖ < ‖H_0‖ / (1 − δ(1 + δ)).
Then, H_k ∈ B(H_0, ‖H_0‖/(1 − δ(1 + δ))) for k ≥ 1. Moreover, we obtain that {H_m} is a Cauchy sequence, so it converges to some H* ∈ L(C[a,b], C[a,b]).

Notice that, if we prove that A^{-1} exists, then H* = A^{-1}. On the other hand, if we do not suppose that A^{-1} exists but consider H_0 such that H_0 A = A H_0, we have:
    (I − λN) y*(x) = y*(x) − λ ∑_{j=1}^{m} α_j(x) A_j = f(x),

and:

    (I − λN)^{-1} f(x) = y*(x) = f(x) + λ ∑_{j=1}^{m} α_j(x) A_j. (12)
Now, if we denote:

    a_ij = ∫_a^b β_i(x) α_j(x) dx  and  b_i = ∫_a^b β_i(x) f(x) dx,
then, we assume that 1/λ is not an eigenvalue of the matrix (a_ij). Thus, if A_1, A_2, ..., A_m is the solution of system (13), we can obtain directly the solution:

    y*(x) = (I − λN)^{-1} f(x) = f(x) + λ ∑_{j=1}^{m} α_j(x) A_j. (14)
From a practical point of view, we can solve the systems defined in (13) by using classical techniques, such as LU decomposition or iterative methods for solving linear systems. We can also use any kind of specific scientific software for this purpose. Notice that System (13) depends directly on the integrals needed for computing the coefficients a_ij and b_i. If analytical integration is not possible, a numerical integration formula must be used.
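The procedure above can be sketched in a few lines. In this illustrative implementation (the function `solve_separable`, the composite-trapezoid quadrature, and the toy kernel are our own choices, not the authors'), system (13) is taken in the form (I − λ(a_ij)) A = b, which follows from (12) and the definitions of a_ij and b_i:

```python
import numpy as np

def solve_separable(alphas, betas, f, lam, a, b, n=2001):
    """Approximate the solution of y = f + lam * int_a^b N(x,t) y(t) dt
    for the separable kernel N(x,t) = sum_j alphas[j](x) * betas[j](t),
    by assembling and solving the linear system (13)."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))          # composite trapezoid weights
    w[0] = w[-1] = (b - a) / (2 * (n - 1))
    m = len(alphas)
    aij = np.array([[np.sum(w * be(x) * al(x)) for al in alphas] for be in betas])
    bvec = np.array([np.sum(w * be(x) * f(x)) for be in betas])
    A = np.linalg.solve(np.eye(m) - lam * aij, bvec)
    # Solution (14): y(x) = f(x) + lam * sum_j A_j * alpha_j(x).
    return lambda xx: f(xx) + lam * sum(Aj * al(xx) for Aj, al in zip(A, alphas))

# Toy check: N(x,t) = x*t, lam = 1, f(x) = 2x/3 has exact solution y(x) = x.
y = solve_separable([lambda x: x], [lambda t: t], lambda x: 2 * x / 3, 1.0, 0.0, 1.0)
print(abs(y(0.5) - 0.5) < 1e-6)  # True
```

Here the integrals are approximated numerically, as suggested above for the case in which analytical integration is not available.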
    N(x,t) = Ñ(x,t) + R(x,t), (15)

and take as starting point the operator:

    H_0 = (I − λÑ)^{-1}. (17)

Then,

    ‖I − H_0 A‖ ≤ |λ| ‖H_0‖ ‖N − Ñ‖ ≤ |λ| ‖H_0‖ ‖R‖ (b − a),
where:

    ‖R‖ = max_{x∈[a,b]} ∫_a^b |R(x,t)| dt,

and ‖H_0‖ is the norm induced by the max-norm in the space L(C[a,b], C[a,b]).
Consequently, if the error R in (15) is small enough, H_0 can be considered a good starting point for the iterative process (6). If, for example, N(x,t) is sufficiently differentiable in one of its arguments, we can apply the Taylor series to calculate the approximation given in (15), and then the bound for the Taylor remainder allows us to establish how close ‖R‖ is to zero. In general, this approximation improves with the number of terms taken in the Taylor expansion.
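For instance, for the kernel N(x,t) = e^{xt} on [0,1] × [0,1] (a kernel of this type reappears in Section 4), truncating the Taylor series after four terms gives a separable kernel whose error is bounded by the Lagrange remainder e/4!. A quick numerical check (the grid size is an arbitrary choice of ours):

```python
import math
import numpy as np

x = np.linspace(0.0, 1.0, 201)
t = np.linspace(0.0, 1.0, 201)
X, T = np.meshgrid(x, t)

# Separable truncation: e^{xt} ~ sum_{i=0}^{3} (x t)^i / i!  (four terms).
N_tilde = sum((X * T) ** i / math.factorial(i) for i in range(4))
R = np.abs(np.exp(X * T) - N_tilde)

# Lagrange remainder bound on [0,1]x[0,1]: |R| <= e^{theta}(xt)^4/4! <= e/4!.
print(R.max() <= math.e / 24)  # True
```

Adding more Taylor terms shrinks this bound factorially, which is why ‖R‖ can be made as small as desired for entire kernels.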
Now, we compute m steps of Newton's method (6) for approximating A^{-1} = (I − λN)^{-1}. Therefore, once H_m is calculated, we consider H_m f(x) as an approximated solution of the Fredholm integral Equation (1). This is the main idea developed in the next section.
    (I − λN) y*(x) = f(x),
    H* (I − λN) y*(x) = H* f(x),
    y*(x) = H* f(x).

Therefore, for approximating the solution y* of Equation (1), we can consider the following iterative scheme, for A = I − λN:

    y_m(x) = H_m f(x),  m ≥ 0, (18)

where {H_m} is the sequence defined in (6); equivalently, y_{m+1}(x) = 2y_m(x) − H_m A y_m(x).
From the previous theorems, it is easy to prove the local and semilocal convergence of the iterative scheme (18).

Theorem 3. Suppose that there exists A^{-1}. Given H_0 ∈ L(C[a,b], C[a,b]) such that H_0 ∈ B(A^{-1}, θ/‖A‖), with θ ∈ (0,1), then, for each y_0 ∈ B(y*, θ‖f‖/‖A‖), the sequence {y_m} defined by (18) belongs to B(y*, θ‖f‖/‖A‖) and converges quadratically to y*, the solution of Equation (1). In addition,

    ‖y_m − y*‖ ≤ θ^{2^m − 1} ‖H_0 − A^{-1}‖ ‖f‖.
Proof. We have:

    ‖H_1 − A^{-1}‖ = ‖(H_1 A − I) A^{-1}‖ ≤ ‖H_0 − A^{-1}‖² ‖A‖ < θ²/‖A‖ < θ/‖A‖,

and:

    ‖y_1 − y*‖ = ‖(H_1 − A^{-1}) f‖ ≤ θ‖f‖/‖A‖.

Now, by a mathematical induction procedure, it is easy to prove that y_m ∈ B(y*, θ‖f‖/‖A‖) for m ∈ ℕ. On the other hand, taking into account (8), we have:

    ‖y_m − y*‖ ≤ ‖H_m − A^{-1}‖ ‖f‖ ≤ θ^{2^m − 1} ‖H_0 − A^{-1}‖ ‖f‖.
Now, to prove a semilocal convergence result for the iterative scheme (18), we will not assume the existence of A^{-1}.

Theorem 4. Let H_0 ∈ L(C[a,b], C[a,b]) such that ‖I − H_0 A‖ ≤ δ, with δ ∈ (0, (√5 − 1)/2). Then, the sequence {y_m} defined by (18) belongs to B(y_0, 2‖H_0‖‖f‖/(1 − δ(1 + δ))) and converges quadratically to y*, the solution of Equation (1).
Proof. If we consider the sequence {y_m} given by (18), from (11), we have the estimate (19), and we obtain that {y_m} is a Cauchy sequence. Then, {y_m} converges to some ỹ. Moreover, taking m = 0 in (19), it follows that {y_m} ⊆ B(y_0, 2‖H_0‖‖f‖/(1 − δ(1 + δ))).

Notice that ỹ is the solution of Equation (1) if we verify that Aỹ(x) = f(x). Then, from Theorem 2, the sequence {H_m} converges to H* with H* A = I and ỹ(x) = H* f(x), so y*(x) = H* A y*(x) = H* f(x) = ỹ(x). Therefore, it follows that ỹ(x) = y*(x) is the solution of Equation (1).
We would like to indicate that this result allows us to locate the solution of Equation (1) in the closed ball:

    B̄(y_0, 2‖H_0‖‖f‖/(1 − δ(1 + δ))).
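On a quadrature grid, scheme (18) can be sketched as follows. The Nyström-type discretization, the toy kernel N(x,t) = x t with λ = 1/2 and f(x) = x, and the initial guess H_0 = I (valid here since ‖I − A‖ < 1 on this problem) are all illustrative choices of ours; the direct solver is used only to measure the error.

```python
import numpy as np

n = 201
a, b = 0.0, 1.0
x = np.linspace(a, b, n)
w = np.full(n, (b - a) / (n - 1)); w[0] = w[-1] = (b - a) / (2 * (n - 1))

lam = 0.5
K = np.outer(x, x)             # toy kernel N(x,t) = x t on the grid
A = np.eye(n) - lam * K * w    # discretization of A = I - lam*N
f = x.copy()

y_exact = np.linalg.solve(A, f)    # reference only, to measure errors

H = np.eye(n)                  # initial guess with ||I - H0 A|| < 1 here
errors = []
for m in range(6):
    y_m = H @ f                    # scheme (18): y_m = H_m f
    errors.append(np.max(np.abs(y_m - y_exact)))
    H = 2 * H - H @ A @ H          # Newton step (6)

print(errors[-1] < 1e-10)  # True: quadratic decay of the error
```

The sequence of errors decreases like δ^{2^m}, in agreement with Theorems 3 and 4.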
4. Examples
We illustrate the theoretical results obtained in the previous sections with some
examples. Firstly, we examine a case with a separable kernel. In this case, the technique
developed in Section 2.1 can be applied.
    f(x) = 1 − (3/4) cos(πx) − (π/16) sin(πx),  λ = −π/8,

and the separable kernel:

    N(x,t) = sin(πx) cos(πt) + cos(πx) sin(πt) = sin(π(x + t)),

that is, α_1(x) = sin(πx), α_2(x) = cos(πx), β_1(t) = cos(πt), and β_2(t) = sin(πt). We have b_1 = −3/8, b_2 = 2/π − π/32, and:

    (a_ij) = (  0   1/2
               1/2   0  ).
Therefore, the solution of the linear system (13) is A1 = −1/2, A2 = 2/π, and by (14), we
obtain the solution of the integral Equation (20):
y∗ ( x ) = f ( x ) + λ( A1 α1 ( x ) + A2 α2 ( x )) = 1 − cos(πx ).
Next, we consider the integral Equation (21), with the non-separable kernel N(x,t) = x cos(πxt²). By its Taylor expansion, we take:

    Ñ(x,t) = x − (1/2) π² t⁴ x³ + (1/4!) π⁴ t⁸ x⁵ − (1/6!) π⁶ t¹² x⁷. (22)

Consequently,

    N(x,t) = Ñ(x,t) + R(θ,x,t),  with R(θ,x,t) = (π⁷ sin(πxθ²)/7!) x⁸ t¹⁴,  θ ∈ (0,t).
Then, we consider the linear Fredholm integral equation:

    y(x) = (4πx − sin(πx))/2 + (1/2) ∫_0^1 ( x − (1/2) π² t⁴ x³ + (1/4!) π⁴ t⁸ x⁵ − (1/6!) π⁶ t¹² x⁷ ) y(t) dt. (23)
Obviously, the difference between the solutions of Equations (21) and (23) depends on the remainder R(θ,x,t). The more terms we take in the expansion of N(x,t), the smaller the remainder becomes and, therefore, the closer the solutions of (21) and (23) are. Note that, moreover, Equation (23) has a separable kernel, so we can obtain its exact solution by following the procedure shown in Section 2.1.
We consider the real functions:

    α_1(x) = x,  α_2(x) = x³,  α_3(x) = x⁵,  α_4(x) = x⁷,
    β_1(t) = 1,  β_2(t) = −(1/2) π² t⁴,  β_3(t) = (1/4!) π⁴ t⁸,  β_4(t) = −(1/6!) π⁶ t¹².

Now, we can obtain y_0(x) = H_0 f(x), with H_0 = (I − λÑ)^{-1}, as an approximated solution to our problem. We follow the steps indicated in Section 2.1 to get y_0(x) = H_0 f(x). In this problem, we have b_1 = 2.8233, b_2 = −4.9502, b_3 = 2.4843, b_4 = −0.5882, and:

    (a_ij) = (   1/2         1/4         1/6         1/8
               −π²/12      −π²/16      −π²/20      −π²/24
                π⁴/240      π⁴/288      π⁴/336      π⁴/384
               −π⁶/10080   −π⁶/11520   −π⁶/12960   −π⁶/14400 ).
y0 ( x ) = f ( x ) + λ( A1 α1 ( x ) + A2 α2 ( x ) + A3 α3 ( x ) + A4 α4 ( x ))
Figure 1. On the left, the graphics of the first approximation to the solution of the integral
Equation (21) and the exact solution y∗ ( x ) = 2πx. On the right, the graphics of the corresponding
error.
Now, we contemplate a problem with a non-separable kernel. In this case, the tech-
nique developed in Section 2.2 and the algorithm given in (18) can be used.
    y(x) = (2x² − 1)/3 + (2/3) eˣ (x − 1) + (1/3) x³ ∫_0^1 e^{xt} y(t) dt. (24)
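The exact solution of (24) is y(x) = x² − 1 (see Figure 2). A quick numerical check of this fact (the quadrature grids are arbitrary choices of ours):

```python
import numpy as np

def f(x):
    return (2 * x**2 - 1) / 3 + (2.0 / 3.0) * np.exp(x) * (x - 1)

x = np.linspace(0.0, 1.0, 11)
t = np.linspace(0.0, 1.0, 20001)
w = np.full(t.size, 1.0 / (t.size - 1)); w[0] = w[-1] = 0.5 / (t.size - 1)

y = t**2 - 1                                   # candidate exact solution
integral = (np.exp(np.outer(x, t)) * w) @ y    # int_0^1 e^{xt} y(t) dt
rhs = f(x) + (x**3 / 3.0) * integral           # right-hand side of (24)

print(np.max(np.abs(rhs - (x**2 - 1))) < 1e-8)  # True
```

Substituting y(t) = t² − 1 into the right-hand side of (24) indeed reproduces x² − 1 up to quadrature error.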
In this case, the kernel is N(x,t) = x³ e^{xt}, with λ = 1/3. By its Taylor expansion, we take:

    Ñ(x,t) = ∑_{i=0}^{3} x^{i+3} t^i / i! = ∑_{i=1}^{4} x^{i+2} t^{i−1} / (i−1)!, (25)

and then:

    N(x,t) = Ñ(x,t) + R(θ,x,t),  with Ñ(x,t) = ∑_{i=1}^{4} α_i(x) β_i(t) and R(θ,x,t) = (e^{xθ}/4!) x⁷ t⁴,  θ ∈ (0,t),
α1 ( x ) = x 3 , α2 ( x ) = x 4 , α3 ( x ) = x 5 , α4 ( x ) = x 6 ,
In our example,

    ‖I − H_0 A‖ ≤ |λ| ‖H_0‖ ‖R‖ (b − a) ≤ (1/3) (e/4!) ‖H_0‖.

Note that:

    ‖λÑ‖ < (1/3) (1 + 1 + 1/2 + 1/3!) = 8/9 < 1,

so, by the Banach lemma on inverse operators, there exists H_0 = (I − λÑ)^{-1} and ‖H_0‖ ≤ 9. Consequently,

    ‖I − H_0 A‖ ≤ 3 e/4! = e/8 < (√5 − 1)/2.
Now, we can use the algorithm (18) with this H_0 to approximate the solution of our problem. Actually, we can obtain the initial approximation y_0(x) = H_0 f(x) by following the procedure shown in Section 2.1. In this case, we have b_1 = (11 − 6e)/9, b_2 = 2(e − 3)/3, b_3 = (241 − 90e)/45, b_4 = (264e − 719)/36, and:
    (a_ij) = ( 1/4  1/5  1/6  1/7
               1/5  1/6  1/7  1/8
               1/6  1/7  1/8  1/9
               1/7  1/8  1/9  1/10 ).
    y_0(x) = f(x) + λ(A_1 α_1(x) + A_2 α_2(x) + A_3 α_3(x) + A_4 α_4(x))
           = (2/3) (−0.0446x⁶ − 0.07x⁵ − 0.1288x⁴ − 0.3377x³ + x² + eˣ(x − 1) − 0.5).
Next, to calculate the first approximation y_1(x), we use (18) to get:

    y_1(x) = 2y_0(x) − H_0 (I − λN) y_0(x) = 2y_0(x) − H_0 y_0(x) + H_0 (λN y_0)(x).

We can compute H_0 y_0(x) by the same technique described for separable kernels, just by changing f(x) to y_0(x), to obtain:

    H_0 y_0(x) = (−0.2432x⁶ − 0.3545x⁵ − 0.6031x⁴ − 1.4584x³ + 2x² + 2eˣ(x − 1) − 1)/3.

In this way,

    y_1(x) = 0.0063x⁶ + 0.0071x⁵ + 0.0081x⁴ − 0.5011 e^{x/2} x³ − 0.1573x³ + (eˣ(−0.009x³ + 2x − 2) + 2x² − 1)/3.
As we can see in Figure 2, y_1(x) improves considerably on the initial approximation y_0(x). On the left side of Figure 2, we can appreciate that y_1(x) practically overlaps the exact solution x² − 1. On the right side, we plot the errors committed by y_0(x) and y_1(x) in approaching the exact solution, that is, the error functions:

    E_i(x) = |y_i(x) − x² + 1|, (26)

where y_i(x), i = 0, 1, are the functions defined above by following our procedure. If we consider more terms in (25), we can obtain a better approximation of the exact solution of (24).
Figure 2. On the left, the graphics of the first two approximations to the solution of the integral
Equation (24) and the exact solution y( x ) = x2 − 1. On the right, the graphics of the corresponding
errors Ei ( x ) defined in (26).
Finally, we compare our procedure with the classical Picard iterative method defined in (3):

    p_{i+1}(x) = f(x) + (1/3) x³ ∫_0^1 e^{xt} p_i(t) dt,  p_0(x) = f(x),  i ≥ 0.
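This comparison can be sketched as follows; the composite-trapezoid discretization of the integral is an implementation detail of ours:

```python
import numpy as np

def f(x):
    return (2 * x**2 - 1) / 3 + (2.0 / 3.0) * np.exp(x) * (x - 1)

n = 2001
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1)); w[0] = w[-1] = 0.5 / (n - 1)
K = np.exp(np.outer(x, x)) * (x**3 / 3.0)[:, None]   # kernel (1/3) x^3 e^{xt}

p = f(x)                        # p_0 = f
errs = []
for i in range(4):
    p = f(x) + (K * w) @ p      # Picard step: p_{i+1} = f + (1/3) x^3 int e^{xt} p_i dt
    errs.append(np.max(np.abs(p - (x**2 - 1))))   # error of p_{i+1} vs exact solution

print(all(e2 < e1 for e1, e2 in zip(errs, errs[1:])))  # True: linear decay
```

The errors shrink only by a roughly constant factor per step, the linear convergence of Picard's method, in contrast with the quadratic decay of scheme (18).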
On the left side of Figure 3, we plot the corresponding error committed by Picard’s method.
In this case, the error functions are:
    P_i(x) = |p_i(x) − x² + 1|,  i = 1, 2, 3. (27)
On the right side of this figure, we can appreciate that the error committed by only one iteration of our method is smaller than the error obtained with three Picard iterations, and is closer to the error committed by four iterations of Picard's method.
Figure 3. On the left, errors committed by the first three iterates of Picard’s method (27). On the right,
comparison among E1 ( x ), P3 ( x ), and P4 ( x ).
5. Conclusions
In this work, we consider the numerical solution of Fredholm integral equations of the
second kind. We transform this problem into another one where the key is to approximate
the inverse of a given operator that allows us to solve the integral equation. In this way, we
construct an iterative procedure based on an important characteristic of Newton’s method:
it does not use the inverse when it is applied to the nonlinear problem of calculating the
inverse of an operator (see [14]). With this idea, we obtain a Picard-type iterative method,
with quadratic convergence, that does not use either derivatives or inverse operators. This
iterative method is more efficient and precise than the classical Picard iteration that is
usually used for solving this kind of problem, at least when a discretization procedure is
not used.
We think our method can provide good starting points for other iterative processes: it can be considered a predictor method that gives a first approximation of the solution of a Fredholm integral equation, to be refined next by another, corrector, iterative method.
Author Contributions: Investigation, J.M.G. and M.Á.H.-V. All authors have read and agreed to the
published version of the manuscript.
Funding: This research was funded by the Spanish Ministerio de Ciencia, Innovación y Universi-
dades, Grant Number PGC2018-095896-B-C21.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
References
1. Ganesh, M.; Joshi, M.C. Numerical solvability of Hammerstein integral equations of mixed type. IMA J. Numer. Anal. 1991, 11,
21–31.
2. Hernández-Verón, M.A.; Martínez, E. On nonlinear Fredholm integral equations with non-differentiable Nemystkii operator.
Math. Methods Appl. Sci. 2020, 43, 7961–7976.
3. Rashidinia J.; Zarebnia, M. New approach for numerical solution of Hammerstein integral equations. Appl. Math. Comput. 2007,
185, 147–154.
4. Argyros, I.K. On a class of nonlinear integral equations arising in neutron transport. Aequ. Math. 1988, 36, 99–111.
5. Bruns, D.D.; Bailey, J.E. Nonlinear feedback control for operating a nonisothermal CSTR near an unstable steady state. Chem. Eng.
Sci. 1977, 32, 257–264.
6. Chandrasekhar, S. Radiative Transfer; Dover: New York, NY, USA, 1960.
7. Argyros, I.K.; Regmi, S. Undergraduate Research at Cameron University on Iterative Procedures in Banach and Other Spaces; Nova
Science Publisher: New York, NY, USA, 2019.
8. Porter, D.; Stirling, D.S.G. Integral Equations; Cambridge University Press: Cambridge, UK, 1990.
9. Davis, H.T. Introduction to Nonlinear Differential and Integral Equations; Dover: New York, NY, USA, 1962.
10. Berinde, V. Iterative Approximation of Fixed Point; Springer: New York, NY, USA, 2005.
11. Adomian, G. Solving Frontier Problems of Physics, The Decomposition Method; Kluwer: Boston, MA, USA, 1994.
12. Wazwaz, A.M. A reliable modification of the Adomian decomposition method. Appl. Math. Comput. 1999, 102, 77–86.
13. He, J.H. Some asymptotic methods for strongly nonlinear equations. Intern. J. Mod. Phys. B. 2006, 20, 1141–1199.
14. Amat, S.; Ezquerro, J.A.; Hernández-Verón, M.A. Approximation of inverse operators by a new family of high-order iterative
methods. Numer. Linear Algebra Appl. 2014, 21, 629–644.
15. Ezquerro, J.A.; Hernández-Verón, M.A. A modification of the convergence conditions for Picard’s iteration. Comp. Appl. Math.
2004, 23, 55–65.
16. Rheinboldt, W.C. Methods for Solving Systems of Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1974.
17. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.