Jacobi Iterative Solution of Poisson's Equation in 1D
John Burkardt
Department of Scientific Computing
Florida State University
http://people.sc.fsu.edu/~jburkardt/presentations/jacobi_poisson_1d.pdf
November 8, 2011
Abstract
This document investigates the use of a Jacobi iterative solver to compute approximate solutions
to a discretization of Poisson’s equation in 1D. The document is intended as a record and guide for a
particular investigation into this problem. Therefore, we specify a particular set of data that represents
an instance of the Poisson equation; we discuss the form of a discretization of the equation which results
in a linear system; we consider a specific implementation of the Jacobi iterative method that was used
to solve the linear system; we then consider the convergence behavior of the iterative method as the size
of the grid increases and look for an alternative solution procedure that will give us the answer more
efficiently. The expectation is that the multigrid method will enable us to solve the 1D problem more
quickly, and to proceed to the 2D problems that are of greater interest.
For uniqueness, the equation is supplemented by Dirichlet boundary conditions at both endpoints:

$$u(a) = u_a, \qquad u(b) = u_b$$

for some given values $u_a$ and $u_b$. It is also possible to replace one of the Dirichlet conditions by a Neumann condition involving the value of the first derivative, such as

$$\frac{du}{dx}(a) = p_a$$

although, again for uniqueness reasons, it is generally not possible to replace both Dirichlet boundary conditions by Neumann conditions.
Figure 1: The Exact Solution to the Sample Poisson Equation.
In the interest of brevity, from this point in the discussion, the term “Poisson equation” should be
understood to refer exclusively to the Poisson equation over a 1D domain with a pair of Dirichlet boundary
conditions.
We will work with a sequence of grids, indexed by $k$, where the $k$-th grid contains

$$n_k = 2^k + 1 \text{ nodes}, \quad \text{for } k = 0, 1, 2, \ldots$$
Since we assume the nodes are equally spaced between $a$ and $b$, inclusive, and since we know the number of nodes in each grid, we can display a formula for the location of the $j$-th node in the $k$-th grid:

$$x_j = \frac{(n_k - j)\,a + (j - 1)\,b}{n_k - 1}, \quad \text{for } j = 1, \ldots, n_k.$$
In the $k$-th grid, there will be $n_k - 1$ subintervals $[x_j, x_{j+1}]$. Since the nodes are evenly spaced, the intervals have a uniform length, known as the mesh spacing or mesh size. Using the given rule for the number of nodes in each grid, the $k$-th grid will have a mesh spacing

$$h_k = \frac{b - a}{2^k}, \quad \text{for } k = 0, 1, 2, \ldots$$
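As a small illustration, the following MATLAB fragment (a sketch, using the unit interval employed by the sample problem) builds the node vector and mesh spacing for a given grid index k, mirroring the formulas above:

  k = 3;                             % grid index
  a = 0.0;                           % left endpoint
  b = 1.0;                           % right endpoint
  nk = 2^k + 1;                      % number of nodes in grid k
  hk = ( b - a ) / 2^k;              % mesh spacing
  xk = ( linspace ( a, b, nk ) )';   % column vector of equally spaced nodes

The same construction appears in the sample program of Section 12.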
The discrete analog of the solution function $u(x)$ is a solution vector $u_j$, with a value given at each node $x_j$ of the mesh. The pairing of discrete nodes and values in this way is sometimes called a mesh function. We can plot this solution vector and, as an aid to visualization, connect successive points $(x_j, u_j)$ to suggest a continuous curve, but our solution procedure will really only produce finitely many points.
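As a concrete example, assuming node and value vectors xk and uk are already available, such a plot can be produced in MATLAB with:

  plot ( xk, uk, 'bo-' );   % circles at the data, line segments to suggest a curve
  xlabel ( 'x' );
  ylabel ( 'u(x)' );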
If we multiply all but the first and last equation by $h^2$, we have:

$$\begin{aligned}
u_1 &= u_a \\
-u_1 + 2 u_2 - u_3 &= h^2 f(x_2) \\
-u_2 + 2 u_3 - u_4 &= h^2 f(x_3) \\
&\ \vdots \\
-u_{n-2} + 2 u_{n-1} - u_n &= h^2 f(x_{n-1}) \\
u_n &= u_b
\end{aligned}$$

and it is easy to see that this constitutes an $n \times n$ linear system for the unknowns $u_j$.
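To make the structure concrete, here is a sketch that assembles and prints the scaled matrix $h^2 A$ for a small case, n = 5, using the same sparse construction as the sample program of Section 12:

  n = 5;
  sup = sparse ( 2:n-1, 3:n,   -1.0, n, n );   % superdiagonal entries
  dgn = sparse ( 2:n-1, 2:n-1,  2.0, n, n );   % diagonal entries
  sub = sparse ( 2:n-1, 1:n-2, -1.0, n, n );   % subdiagonal entries
  h2A = sup + dgn + sub;
  h2A(1,1) = 1.0;                              % boundary condition rows
  h2A(n,n) = 1.0;
  full ( h2A )                                 % display as a dense matrix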
This scaling reveals a simple structure to the matrix for any grid index $k$. However, we will find that we should not use this scaling when solving a sequence of problems for different grid indices. Were we to use the scaling, then as the grid index $k$ rises, we would be scaling the residuals for later grids by a factor $h_k^2$ that goes to zero. This would have the unfortunate effect that a fixed convergence tolerance tol could not be used for the sequence of linear systems. Therefore, when convenient, we will temporarily suppress the $h_k^2$ divisor when discussing the linear system matrix, but in those cases we will identify the matrix as $h^2 A$.

By inspection, it is clear that $h^2 A$ is banded and, once the known boundary values are eliminated, the remaining interior block is symmetric. The matrix has another important property that is not immediately obvious: it is positive definite. Note that the matrix retains these properties whether or not we scale it by $h^2$. The fact that our system matrix is symmetric and positive definite has important implications when we look at our options for solving the linear system.
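A quick numerical check of these claims (a sketch, not a proof) examines the interior block of $h^2 A$, the familiar -1, 2, -1 tridiagonal matrix, and confirms that it is symmetric with strictly positive eigenvalues:

  m = 7;                               % size of the interior block
  T = 2 * eye ( m ) ...
    - diag ( ones ( m-1, 1 ),  1 ) ...
    - diag ( ones ( m-1, 1 ), -1 );    % the -1, 2, -1 matrix
  issymmetric ( T )                    % reports true
  min ( eig ( T ) )                    % smallest eigenvalue is positive

Here issymmetric requires a relatively recent version of MATLAB; isequal ( T, T' ) works as well.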
One of the simplest iterative solvers for linear systems is known as the Jacobi iteration. The satisfactory use of an iterative solver generally requires that the linear system satisfy some conditions; for the Jacobi iteration, convergence is guaranteed, for example, for strictly diagonally dominant matrices, and for symmetric positive definite matrices $A$ for which $2D - A$ is also positive definite, where $D$ is the diagonal of $A$; our matrix qualifies. The Jacobi iteration is easy to implement and study; we will be able to solve small problems with it, but when we begin to explore larger linear systems, we will see that we need a more powerful iterative solver.
Given a current approximation $x^0$, the Jacobi update computes a new approximation $x^1$ by solving the $i$-th equation for the $i$-th variable, using the old values of all the other variables:

$$x^1_i = \Big( f_i - \sum_{j \ne i} a_{ij} x^0_j \Big) / a_{ii}$$

If $x^0$ were an exact solution, this process would not change any entries. The update of each element of the new approximate solution is independent of the others. If we decompose the matrix as $A = L + D + U$, with $L$, $D$ and $U$ representing the strictly lower triangular, diagonal, and strictly upper triangular parts, we can also write

$$x^1 = D^{-1} \, (f - (L + U) \, x^0)$$
a concise vectorized format that is very suitable for rapid calculation in a programming language such as
MATLAB.
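In MATLAB, a single update in this form can be written directly; the following fragment is a sketch assuming the matrix A, right hand side f, and current iterate x0 are already defined:

  D = diag ( diag ( A ) );     % diagonal part of A, as a matrix
  LU = A - D;                  % strictly lower plus strictly upper triangle
  x1 = D \ ( f - LU * x0 );    % one Jacobi update

The jacobi function in Section 12 implements the same update using elementwise division by diag ( A ), which avoids forming D explicitly.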
The Jacobi iteration consists of starting with an initial approximation $x^0$ and repeatedly applying the Jacobi update, creating a sequence $x^0, x^1, x^2, \ldots$ which, for a matrix like ours, converges to the exact solution.
The control that makes sense to apply to the iteration checks the residual: having computed the $j$-th iterate $x^j$, we define the residual $r^j$ by

$$r^j = A x^j - f$$

and control the error using the RMS norm $\|r^j\| / \sqrt{n}$.
The advantage of monitoring the residual is the guarantee that, if the residual norm is small, then the approximate solution satisfies the equations, on average, to the given tolerance. Notice that this does not say that our approximate solution is actually that close to the true solution, or even that it is close at all. However, as long as we don't actually know the true solution, monitoring the residual is the proper way to control and terminate an iteration.
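In code, this control amounts to a test like the following after each update (a sketch, with tol the user's tolerance and n the number of equations):

  r = A * x - f;                      % residual of the current iterate
  r_rms = norm ( r ) / sqrt ( n );    % RMS norm of the residual
  converged = ( r_rms <= tol );       % stop the iteration when true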
10 A Sample Calculation

We can examine the results of our discretization and iterative approximations for the sample problem with a grid index $k = 5$, resulting in $n_k = 33$ points. Our Jacobi iteration will use an RMS residual tolerance of 0.000001. We compare the exact solution of the continuous problem against the solution of the discretized problem computed directly and with the Jacobi iteration.
The closeness of the results suggests that the spatial discretization is fine enough that we are getting good approximations to the exact solution of the continuous problem. Moreover, the approximations produced by the Jacobi iteration are very close to those computed directly. On the other hand, the iteration required 3,088 steps, which might seem a surprisingly high cost. We will consider this question in more detail shortly.
I X U_Exact U_Direct U_Jacobi
1 0.0000 -0 -1.402e-15 0
2 0.0312 -0.03123 -0.03121 -0.03121
3 0.0625 -0.06237 -0.06233 -0.06233
4 0.0938 -0.09331 -0.09325 -0.09325
5 0.1250 -0.1239 -0.1239 -0.1239
6 0.1562 -0.1541 -0.154 -0.154
7 0.1875 -0.1838 -0.1837 -0.1837
8 0.2188 -0.2127 -0.2126 -0.2126
9 0.2500 -0.2408 -0.2406 -0.2406
10 0.2812 -0.2678 -0.2677 -0.2677
11 0.3125 -0.2937 -0.2935 -0.2935
12 0.3438 -0.3181 -0.318 -0.318
13 0.3750 -0.341 -0.3408 -0.3408
14 0.4062 -0.3621 -0.3619 -0.3619
15 0.4375 -0.3812 -0.381 -0.381
16 0.4688 -0.3979 -0.3977 -0.3977
17 0.5000 -0.4122 -0.412 -0.412
18 0.5312 -0.4236 -0.4234 -0.4234
19 0.5625 -0.4319 -0.4317 -0.4317
20 0.5938 -0.4368 -0.4366 -0.4366
21 0.6250 -0.4379 -0.4377 -0.4377
22 0.6562 -0.4348 -0.4346 -0.4346
23 0.6875 -0.4273 -0.4271 -0.4271
24 0.7188 -0.4148 -0.4146 -0.4146
25 0.7500 -0.3969 -0.3968 -0.3968
26 0.7812 -0.3733 -0.3731 -0.3731
27 0.8125 -0.3433 -0.3432 -0.3432
28 0.8438 -0.3065 -0.3064 -0.3064
29 0.8750 -0.2624 -0.2623 -0.2623
30 0.9062 -0.2103 -0.2102 -0.2102
31 0.9375 -0.1496 -0.1496 -0.1496
32 0.9688 -0.07976 -0.07973 -0.07973
33 1.0000 0 0 0
11 The Cost of the Jacobi Iteration

The convergence study indicates that the cost $c_k$ of solving the linear system for grid index $k$ by Jacobi iteration grows like

$$c_k \sim O(n_k^3)$$
For such a cost growth rate, the algorithm will quickly become infeasible with increasing $n$. What is worse, we are so far only considering a 1D problem, which we expect might have more tractable cost growth rates than the higher dimensional cases we are also interested in. We clearly need to reconsider the use of an iterative solution of the linear system by Jacobi's method if we hope to solve problems on a fine grid, to extend this approach to higher-dimensional geometries, or to handle cases in which a nonlinearity means that we must solve many linear systems efficiently.
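One way to observe this growth (a sketch, assuming the jacobi function of Section 12 is available, with its per-step printing suppressed to keep the output manageable) is to record the iteration counts over a sequence of grid indices:

  a = 0.0;
  b = 1.0;
  tol = 0.000001;
  for k = 2 : 7
    nk = 2^k + 1;
    hk = ( b - a ) / 2^k;
    xk = ( linspace ( a, b, nk ) )';
    fk = - xk .* ( xk + 3 ) .* exp ( xk );           % forcing term of the sample problem
    fk(1) = 0.0;                                     % boundary values ua = ub = 0
    fk(nk) = 0.0;
    sup = sparse ( 2:nk-1, 3:nk,   -1.0, nk, nk );
    dgn = sparse ( 2:nk-1, 2:nk-1,  2.0, nk, nk );
    sub = sparse ( 2:nk-1, 1:nk-2, -1.0, nk, nk );
    A = ( sup + dgn + sub ) / hk^2;
    A(1,1) = 1.0;
    A(nk,nk) = 1.0;
    [ u, it ] = jacobi ( nk, A, fk, zeros ( nk, 1 ), tol );
    fprintf ( 1, '  k = %d, nk = %4d, Jacobi iterations = %7d\n', k, nk, it );
  end

If the iteration count roughly quadruples each time k increases by 1, while the work per iteration doubles, the total cost grows by roughly a factor of 8, consistent with $O(n_k^3)$.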
12 Sample Program
We include here a sample MATLAB program which was used to carry out the convergence study for the
example problem. The program takes as input the value k, which is the grid index, defines a problem of the
corresponding size, and runs the Jacobi iteration until the convergence tolerance is achieved.
A copy of the program is available at
http://people.sc.fsu.edu/~jburkardt/m_src/jacobi_poisson_1d/jacobi_poisson_1d.m
function jacobi_poisson_1d ( k )
%*****************************************************************************80
%
%% JACOBI_POISSON_1D uses Jacobi iteration for the 1D Poisson equation.
%
% Parameters:
%
% Commandline input, integer K, the grid index.
% K specifies the number of nodes, by the formula NK = 2^K + 1.
%
%
fprintf ( 1, '\n' );
fprintf ( 1, 'JACOBI_POISSON_1D:\n' );
fprintf ( 1, '  Use Jacobi iteration for the 1D Poisson equation.\n' );
%
% Set boundaries.
%
a = 0.0;
b = 1.0;
%
% Set boundary conditions.
%
ua = 0.0;
ub = 0.0;
%
% Get NK.
%
nk = 2^k + 1;
%
% Set XK.
%
xk = ( linspace ( a, b, nk ) )';
%
% Get HK.
%
hk = ( b - a ) / ( nk - 1 );
%
% Set FK.
%
fk = force ( xk );
fk(1) = ua;
fk(nk) = ub;
%
% Set the -1, 2, -1 entries of A.
%
% In order that the operator A approximate the Poisson operator,
% and in order that we can compare linear systems for successive grids,
% we should NOT multiply through by hk^2.
%
% Though it is tempting to try to "normalize" the matrix A, the
% unintended result is to scale our right hand side by a multiplicative
% factor of hk^2, which means that we make it easier and easier to
% satisfy the RMS residual tolerance, as NK increases, with solution
% vectors that are actually worse and worse.
%
sup = sparse ( 2:nk-1, 3:nk,   -1.0, nk, nk );
main_diag = sparse ( 2:nk-1, 2:nk-1, 2.0, nk, nk );  % named to avoid shadowing MATLAB's diag()
sub = sparse ( 2:nk-1, 1:nk-2, -1.0, nk, nk );
A = ( sup + main_diag + sub ) / hk^2;
A(1,1) = 1.0;
A(nk,nk) = 1.0;
%
% Just because we can, ask MATLAB to get the exact solution of the linear system
% directly.
%
udk = A \ fk;
%
% Sample the solution to the continuous problem.
%
uek = exact ( xk );
%
% Use Jacobi iteration to solve the linear system to the given tolerance.
%
ujk = zeros ( nk, 1 );
tol = 0.000001;

[ ujk, it ] = jacobi ( nk, A, fk, ujk, tol );
fprintf ( 1, '  RMS discretization error in Poisson solution = %g\n', ...
  norm ( uek - ujk ) / sqrt ( nk ) );
fprintf ( 1, '\n' );
fprintf ( 1, '     I      X       U_Exact    U_Direct    U_Jacobi\n' );
fprintf ( 1, '\n' );
for i = 1 : nk
  fprintf ( 1, '  %4d  %10.4f  %10.4g  %10.4g  %10.4g\n', ...
    i, xk(i), uek(i), udk(i), ujk(i) );
end
%
% Terminate.
%
fprintf ( 1, '\n' );
fprintf ( 1, 'JACOBI_POISSON_1D:\n' );
fprintf ( 1, '  Normal end of execution.\n' );
return
end
function uex = exact ( x )
%*****************************************************************************80
%
%% EXACT evaluates the exact solution of the continuous problem.
%
%  The solution u(x) = x * ( x - 1 ) * exp ( x ) satisfies u(0) = u(1) = 0.
%
  uex = x .* ( x - 1 ) .* exp ( x );
return
end
function f = force ( x )
%*****************************************************************************80
%
%% FORCE evaluates the forcing term f(x) = - u''(x).
%
  f = - x .* ( x + 3 ) .* exp ( x );
return
end
function [ u, it ] = jacobi ( n, A, f, u, tol )
%*****************************************************************************80
%
%% JACOBI carries out the Jacobi iteration until the RMS residual is below TOL.
%
  fprintf ( 1, '\n' );
  fprintf ( 1, '    Step    Residual      Change\n' );
  fprintf ( 1, '\n' );
  it = 0;
  while ( 1 )
    u_old = u;
%
%  Update every entry of U simultaneously, using the old values.
%
    u = ( f - A * u_old + ( diag ( A ) .* u_old ) ) ./ diag ( A );
    r = A * u - f;
    it = it + 1;
    r_norm = norm ( r ) / sqrt ( n );
    u_change = norm ( u - u_old ) / sqrt ( n );
    fprintf ( 1, '  %6d  %10.4e  %10.4e\n', it, r_norm, u_change );
%
%  Stop when the RMS residual falls below the tolerance.
%
    if ( r_norm <= tol )
      break
    end
  end
  return
end