Resistor Network Approach to EIT
L. Borcea
V. Druskin
F. Guevara Vasquez
A.V. Mamonov
Abstract
We review a resistor network approach to the numerical solution of the inverse problem of electrical
impedance tomography (EIT). The networks arise in the context of finite volume discretizations of the
elliptic equation for the electric potential, on sparse and adaptively refined grids that we call optimal.
The name refers to the fact that the grids give spectrally accurate approximations of the Dirichlet to
Neumann map, the data in EIT. The fundamental feature of the optimal grids in inversion is that they
connect the discrete inverse problem for resistor networks to the continuum EIT problem.
Introduction
We consider the inverse problem of electrical impedance tomography (EIT) in two dimensions [11]. It seeks
the scalar valued, positive and bounded conductivity \sigma(x), the coefficient in the elliptic partial differential
equation for the potential u \in H^1(\Omega),

\nabla \cdot [\sigma(x) \nabla u(x)] = 0, \quad x \in \Omega.    (1.1)

The domain \Omega is a bounded and simply connected set in \mathbb{R}^2 with smooth boundary B. Because all such
domains are conformally equivalent by the Riemann mapping theorem, we assume throughout that \Omega is the
unit disk,

\Omega = \{ x = (r\cos\theta, r\sin\theta), \ 0 \le r < 1, \ \theta \in [0, 2\pi) \}.    (1.2)
The EIT problem is to determine \sigma(x) from measurements of the Dirichlet to Neumann (DtN) map \Lambda_\sigma or,
equivalently, the Neumann to Dirichlet (NtD) map \Lambda_\sigma^\dagger. We consider the full boundary setup, with access to the
entire boundary, and the partial measurement setup, where the measurements are confined to an accessible
subset B_A of B, and the remainder B_I = B \setminus B_A of the boundary is grounded (u|_{B_I} = 0).

The DtN map \Lambda_\sigma : H^{1/2}(B) \to H^{-1/2}(B) takes arbitrary boundary potentials u_B in the trace space
H^{1/2}(B) to normal boundary currents

\Lambda_\sigma u_B(x) = \sigma(x)\, n(x) \cdot \nabla u(x), \quad x \in B,    (1.3)

where n(x) is the outer normal at x \in B and u(x) solves (1.1) with Dirichlet boundary conditions

u(x) = u_B(x), \quad x \in B.    (1.4)

Note that \Lambda_\sigma has a null space consisting of constant potentials and thus, it is invertible only on a subset J
of H^{-1/2}(B), defined by

J = \left\{ J \in H^{-1/2}(B), \ \int_B J(x)\, ds(x) = 0 \right\}.    (1.5)

Its generalized inverse is the NtD map \Lambda_\sigma^\dagger : J \to H^{1/2}(B), which takes boundary currents J_B \in J to
boundary potentials

\Lambda_\sigma^\dagger J_B(x) = u(x), \quad x \in B,    (1.6)

where u(x) solves (1.1) with Neumann boundary conditions

\sigma(x)\, n(x) \cdot \nabla u(x) = J_B(x), \quad x \in B,    (1.7)

and it is defined up to an additive constant, which can be fixed for example by setting the potential to zero
at one boundary point, as if it were connected to the ground.
It is known that \Lambda_\sigma determines \sigma uniquely in the full boundary setup [5]. See also the earlier uniqueness
results [56, 18], obtained under some smoothness assumptions on \sigma. Uniqueness holds for the partial boundary setup
as well, at least for \sigma \in C^{3+\epsilon}(\bar\Omega) with \epsilon > 0 [39], and for real-analytic or piecewise real-analytic \sigma.

However, the problem is exponentially unstable, as shown in [1, 9, 53]. Given two sufficiently regular
conductivities \sigma_1 and \sigma_2, the best possible stability estimate is of logarithmic type,

\|\sigma_1 - \sigma_2\|_{L^\infty(\Omega)} \le c \left| \log \|\Lambda_{\sigma_1} - \Lambda_{\sigma_2}\|_{H^{1/2}(B),\,H^{-1/2}(B)} \right|^{-\alpha},    (1.8)

with some positive constants c and \alpha. This means that if we have noisy measurements, we cannot expect
the reconstructed conductivity to be close to the true one uniformly in \Omega, unless the noise is exponentially small.
In practice the noise plays a role and the inversion can be carried out only by imposing some regularization constraints on \sigma. Moreover, we have finitely many measurements of the DtN map and we seek
numerical approximations of \sigma with finitely many degrees of freedom (parameters). The stability of these
approximations depends on the number of parameters and their distribution in the domain \Omega.

It is shown in [2] that if \sigma is piecewise constant, with a bounded number of unknown values, then the
stability estimates on \sigma are no longer of the form (1.8), but they become of Lipschitz type. However, it is
not really understood how the Lipschitz constant depends on the distribution of the unknowns in \Omega. Surely,
it must be easier to determine the features of the conductivity near the boundary than deep inside \Omega.

The question is then how to parametrize the unknown conductivity in numerical inversion so that we
can control its stability and do not need excessive regularization with artificial penalties that introduce
artifacts in the results. Adaptive parametrizations for EIT have been considered for example in [43, 50] and
[3, 4]. Here we review our inversion approach that is based on resistor networks that arise in finite volume
discretizations of (1.1) on sparse and adaptively refined grids which we call optimal. The name refers to the
fact that they give spectrally accurate approximations of \Lambda_\sigma on finite volume grids. One of their important
features is that they are refined near the boundary, where we make the measurements, and coarse away from
it. Thus they capture the expected loss of resolution of the numerical approximations of \sigma.
Optimal grids were introduced in [29, 30, 41, 7, 6] for accurate approximations of the DtN map in forward
problems. Having such approximations is important for example in domain decomposition approaches to
solving second order partial differential equations and systems, because the action of a sub-domain can be
replaced by the DtN map on its boundary [61]. In addition, accurate approximations of DtN maps allow
truncations of the computational domain for solving hyperbolic problems. The studies in [29, 30, 41, 7, 6]
work with spectral decompositions of the DtN map, and show that by just placing grid points optimally in
the domain, one can obtain exponential convergence rates of approximations of the DtN map with second
order finite difference schemes. That is to say, although the solution of the forward problem is second order
accurate inside the computational domain, the DtN map is approximated with spectral accuracy. Problems
with piecewise constant and anisotropic coefficients are considered in [31, 8].
The optimal grids are useful in the context of numerical inversion, because they resolve the inconsistency
between the exponential ill posedness of the problem and the second order convergence of typical
discretization schemes applied to equation (1.1) on ad hoc grids that are usually uniform. Since the inverse problem amplifies
discretization errors exponentially, the forward approximation of the DtN map should converge
exponentially fast, and this can be achieved by discretizing on the optimal grids.
In this article we review the use of optimal grids in inversion, as it was developed over the last few years
in [12, 14, 13, 37, 15, 16, 52]. We present first, in section 3, the case of layered conductivity \sigma = \sigma(r) and
full boundary measurements, where the DtN map has eigenfunctions e^{ik\theta} and eigenvalues denoted by f(k^2),
with integer k. Then, the forward problem can be stated as one of rational approximation of f(\lambda), for \lambda in
the complex plane, away from the negative real axis. We explain in section 3 how to compute the optimal
grid from such rational approximants and also how to use it in inversion. The optimal grid depends on the
type of discrete measurements that we make of \Lambda_\sigma (i.e., of f(\lambda)), and so does the accuracy and stability of the
resulting approximations of \sigma.
The two dimensional problem \sigma = \sigma(r, \theta) is reviewed in sections 4 and 5. The easier case of full access
to the boundary, and discrete measurements at n equally distributed points on B, is in section 4. There, the
grids are essentially the same as in the layered case and the finite volume discretization leads to circular
networks with topology determined by the grids. We show how to use the discrete inverse problem theory for
circular networks developed in [22, 23, 40, 25, 26] for the numerical solution of the EIT problem. Section 5
considers the more difficult partial boundary measurement setup, where the accessible boundary consists of
either one connected subset of B or two disjoint subsets. There, the optimal grids are truly two dimensional.
Figure 1: Finite volume discretization on a staggered grid. The primary grid lines are solid and the dual
ones are dashed. The primary grid nodes and the dual nodes are indicated with distinct markers. The dual cell
C_{i,j}, with vertices (dual nodes) P_{i\pm 1/2, j\pm 1/2}, surrounds the primary node P_{i,j}. A resistor is shown as a rectangle
with axis along a primary line, that intersects a dual line at the marked point.
Resistor networks arise naturally in the context of finite volume discretizations of the elliptic equation (1.1)
on staggered grids with interlacing primary and dual lines that may be curvilinear, as explained in section 2.1.
Standard finite volume discretizations use arbitrary, usually equidistant tensor product grids. We consider
optimal grids that are designed to obtain very accurate approximations of the measurements of the DtN
map, the data in the inverse problem. The geometry of these grids depends on the measurement setup. We
describe in section 2.2 the type of grids used for the full measurement case, where we have access to the
entire boundary B. The grids for the partial boundary measurement setup are discussed later, in section 5.
2.1 Finite volume discretization
See Figure 1 for an illustration of a staggered grid. The potential u(x) in equation (1.1) is discretized at the
primary nodes P_{i,j}, the intersections of the primary grid lines, and the finite volume method balances the
fluxes across the boundary of the dual cells C_{i,j},

\int_{C_{i,j}} \nabla \cdot [\sigma(x) \nabla u(x)]\, dx = \int_{\partial C_{i,j}} \sigma(x)\, n(x) \cdot \nabla u(x)\, ds(x) = 0.    (2.1)

A dual cell C_{i,j} contains a primary point P_{i,j}, it has vertices (dual nodes) P_{i\pm 1/2, j\pm 1/2}, and boundary

\partial C_{i,j} = \Sigma_{i,j+1/2} \cup \Sigma_{i+1/2,j} \cup \Sigma_{i,j-1/2} \cup \Sigma_{i-1/2,j},    (2.2)

the union of the dual line segments \Sigma_{i,j\pm 1/2} = (P_{i-1/2,j\pm 1/2}, P_{i+1/2,j\pm 1/2}) and \Sigma_{i\pm 1/2,j} = (P_{i\pm 1/2,j-1/2}, P_{i\pm 1/2,j+1/2}). Let
us denote by \mathcal{P} = \{P_{i,j}\} the set of primary nodes, and define the potential function U : \mathcal{P} \to \mathbb{R} as the finite
volume approximation

U_{i,j} \approx u(P_{i,j}), \quad P_{i,j} \in \mathcal{P}.    (2.3)
The set \mathcal{P} is the union of two disjoint sets \mathcal{P}_I and \mathcal{P}_B of interior and boundary nodes, respectively. Adjacent
nodes in \mathcal{P} are connected by edges in the set \mathcal{E} \subset \mathcal{P} \times \mathcal{P}. We denote the edges by E_{i,j-1/2} = (P_{i,j}, P_{i,j-1})
and E_{i-1/2,j} = (P_{i-1,j}, P_{i,j}).

The finite volume discretization results in a system of linear equations for the potential,

\gamma_{i+1/2,j}(U_{i+1,j} - U_{i,j}) + \gamma_{i-1/2,j}(U_{i-1,j} - U_{i,j}) + \gamma_{i,j+1/2}(U_{i,j+1} - U_{i,j}) + \gamma_{i,j-1/2}(U_{i,j-1} - U_{i,j}) = 0,    (2.4)

where the conductances \gamma_{i,j\pm 1/2} and \gamma_{i\pm 1/2,j} are given by integrals of \sigma over the dual line segments \Sigma_{i,j\pm 1/2} and \Sigma_{i\pm 1/2,j} that cross the corresponding edges (2.5).
Equations (2.4) are Kirchhoff's law for the interior nodes of a resistor network (\Gamma, \gamma) with graph \Gamma = (\mathcal{P}, \mathcal{E})
and conductance function \gamma : \mathcal{E} \to \mathbb{R}_+. Let p_q denote the numbered nodes; they correspond to points like P_{i,j} in Figure 1. Let also U_I and U_B be
the vectors with entries given by the potential at the interior nodes and boundary nodes, respectively. The
vector of boundary fluxes is denoted by J_B. We assume throughout that there are n boundary nodes, so
U_B, J_B \in \mathbb{R}^n. The network equations are

K U = \begin{pmatrix} 0 \\ J_B \end{pmatrix}, \quad U = \begin{pmatrix} U_I \\ U_B \end{pmatrix}, \quad K = \begin{pmatrix} K_{II} & K_{IB} \\ K_{BI} & K_{BB} \end{pmatrix},    (2.6)

where K is the Kirchhoff matrix with entries

K_{i,j} = \begin{cases} -\gamma(E), & \text{if } i \ne j \text{ and } E = (p_i, p_j) \in \mathcal{E}, \\ 0, & \text{if } i \ne j \text{ and } (p_i, p_j) \notin \mathcal{E}, \\ \sum_{k:\, E = (p_i, p_k) \in \mathcal{E}} \gamma(E), & \text{if } i = j. \end{cases}    (2.7)

In (2.6) we write it in block form, with K_{II} the block with row and column indices restricted to the interior
nodes, K_{IB} the block with row indices restricted to the interior nodes and column indices restricted to the
boundary nodes, and so on. Note that K is symmetric, so K_{BI} = K_{IB}^T, and its rows and columns sum to zero, which is just
the condition of conservation of currents.
It is shown in [22] that the potential U satisfies a discrete maximum principle: its minimum and maximum
entries are located on the boundary. This implies that the network equations with Dirichlet boundary
conditions U_B,

K_{II} U_I = -K_{IB} U_B,    (2.8)
Figure 2: Examples of grids. The primary grid lines are solid and the dual ones are dotted. Both grids
have n = 6 primary boundary points, and layer index \ell = 3. We have the type of grid indexed by
m_{1/2} = 0 on the left and by m_{1/2} = 1 on the right.
have a unique solution if K_{IB} has full rank. That is to say, K_{II} is invertible and we can eliminate U_I from
(2.6) to obtain

J_B = \left( K_{BB} - K_{BI} K_{II}^{-1} K_{IB} \right) U_B = \Lambda_\gamma U_B.    (2.9)

The matrix \Lambda_\gamma \in \mathbb{R}^{n \times n} is the Dirichlet to Neumann map of the network. It takes the boundary potential
U_B to the vector J_B of boundary fluxes, and is given by the Schur complement of the block K_{BB},

\Lambda_\gamma = K_{BB} - K_{BI} K_{II}^{-1} K_{IB}.    (2.10)

The DtN map \Lambda_\gamma is symmetric, with nontrivial null space spanned by the vector 1_B \in \mathbb{R}^n of all ones. The
symmetry follows directly from the symmetry of K. Since the columns of K sum to zero, K 1 = 0, where 1
is the vector of all ones. Then, (2.9) gives J_B = 0 = \Lambda_\gamma 1_B, which means that 1_B is in the null space of \Lambda_\gamma.
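The assembly (2.7) and the Schur complement (2.10) can be sketched numerically. The following is a minimal illustration on a hypothetical three-resistor path network (not an example from the text), checking the symmetry and null space properties stated above:

```python
import numpy as np

# Toy network: a path with 4 nodes; nodes 0 and 3 are boundary,
# nodes 1 and 2 are interior. Edge conductances are arbitrary positives.
edges = [(0, 1), (1, 2), (2, 3)]
gamma = [2.0, 0.5, 1.0]

K = np.zeros((4, 4))
for (p, q), g in zip(edges, gamma):
    # entries of the Kirchhoff matrix, as in (2.7)
    K[p, q] -= g
    K[q, p] -= g
    K[p, p] += g
    K[q, q] += g

# rows and columns of K sum to zero (conservation of currents)
assert np.allclose(K.sum(axis=0), 0) and np.allclose(K.sum(axis=1), 0)

interior, boundary = [1, 2], [0, 3]
K_II = K[np.ix_(interior, interior)]
K_IB = K[np.ix_(interior, boundary)]
K_BB = K[np.ix_(boundary, boundary)]

# DtN map of the network: Schur complement of K_BB, as in (2.10)
Lam = K_BB - K_IB.T @ np.linalg.solve(K_II, K_IB)

# Lam is symmetric and annihilates constant boundary potentials
assert np.allclose(Lam, Lam.T)
assert np.allclose(Lam @ np.ones(2), 0)
```

For resistors in series the entry Lam[0, 0] equals the series conductance, here (1/2 + 1/0.5 + 1/1)^{-1} = 2/7.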
The inverse problem for a network (\Gamma, \gamma) is to determine the conductance function \gamma from the DtN
map \Lambda_\gamma. The graph \Gamma is assumed known, and it plays a key role in the solvability of the inverse problem
[22, 23, 40, 25, 26]. More precisely, \Gamma must satisfy a certain criticality condition for the network to be
uniquely recoverable from \Lambda_\gamma, and its topology should be adapted to the type of measurements that we
have. We review these facts in detail in sections 3-5. We also show there how to relate the continuum DtN
map \Lambda_\sigma to the discrete DtN map \Lambda_\gamma. The inversion algorithms in this paper use the solution of the discrete
inverse problem for networks to determine approximately the solution \sigma(x) of the continuum EIT problem.
2.2 Grids for the full boundary measurement setup
In the full boundary measurement setup, we have access to the entire boundary B, and it is natural to
discretize the domain (1.2) with tensor product grids that are uniform in angle, as shown in Figure 2. Let

\theta_j = \frac{2\pi (j-1)}{n}, \quad \hat\theta_j = \frac{2\pi (j - 1/2)}{n}, \quad j = 1, \dots, n,    (2.11)

be the angular locations of the primary and dual nodes. The radii of the primary and dual layers are denoted
by r_i and \hat r_i, and we count them starting from the boundary. We can have two types of grids, so we introduce
the parameter m_{1/2} \in \{0, 1\} to distinguish between them. We have

1 = r_1 = \hat r_1 > r_2 > \hat r_2 > \dots > r_\ell > \hat r_\ell > r_{\ell+1} \ge 0    (2.12)

for m_{1/2} = 0, and

1 = \hat r_1 = r_1 > \hat r_2 > r_2 > \dots > r_\ell > \hat r_{\ell+1} > r_{\ell+1} \ge 0    (2.13)

for m_{1/2} = 1. In either case there are \ell + 1 primary layers and \ell + m_{1/2} dual ones, as illustrated in Figure
2. We explain in sections 3 and 4 how to place optimally in the interval [0, 1] the primary and dual radii, so
that the finite volume discretization gives an accurate approximation of the DtN map \Lambda_\sigma.
The graph of the network is given by the primary grid. We follow [22, 23] and call it a circular network.
It has n boundary nodes and n(2\ell + m_{1/2} - 1) edges. Each edge is associated with an unknown conductance
that is to be determined from the discrete DtN map \Lambda_\gamma, defined by measurements of \Lambda_\sigma, as explained in
sections 3 and 4. Since \Lambda_\gamma is symmetric, with columns summing to zero, it contains n(n-1)/2 independent measurements.
Thus, we have the same number of unknowns as data points when

2\ell + m_{1/2} - 1 = \frac{n-1}{2}, \quad n \text{ an odd integer}.    (2.14)

This condition turns out to be necessary and sufficient for the DtN map \Lambda_\gamma to determine uniquely a circular
network, as shown in [26, 23, 13]. We assume henceforth that it holds.
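The count matching in (2.14) can be checked with a few lines of arithmetic. The triples below are illustrative choices satisfying the condition, not examples taken from the text:

```python
# Check the matching of unknowns and data counts in (2.14) for a few grids.
def counts(n, ell, m_half):
    edges = n * (2 * ell + m_half - 1)   # unknown conductances
    data = n * (n - 1) // 2              # independent entries of the DtN map
    return edges, data

# n odd, with 2*ell + m_half - 1 = (n - 1)/2:
for n, ell, m_half in [(13, 3, 1), (25, 6, 1), (35, 9, 0)]:
    assert 2 * ell + m_half - 1 == (n - 1) // 2
    e, d = counts(n, ell, m_half)
    assert e == d
```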
Layered media
In this section we assume a layered conductivity function \sigma(r) in \Omega, the unit disk, and access to the entire
boundary B. Then, the problem is rotation invariant and can be simplified by writing the potential as a
Fourier series in the angle \theta.
and discrete DtN maps and define their eigenvalues, which contain all the information about the layered
conductivity. Then, we explain in section 3.2 how to construct finite volume grids that give discrete DtN
maps with eigenvalues that are accurate, rational approximations of the eigenvalues of the continuum DtN
map. One such approximation brings an interesting connection between a classic Sturm-Liouville inverse
spectral problem [34, 19, 38, 54, 55] and an inverse eigenvalue problem for Jacobi matrices [20], as described
in sections 3.2.3 and 3.3. This connection allows us to solve the continuum inverse spectral problem with
efficient, linear algebra tools. The resulting algorithm is the first example of resistor network inversion on
optimal grids proposed and analyzed in [14], and we review its convergence study in section 3.3.
3.1
Because equation (1.1) is separable in layered media, we write the potential u(r, \theta) as a Fourier series,

u(r, \theta) = v_B(0) + \sum_{k \in \mathbb{Z},\, k \ne 0} v(r, k)\, e^{ik\theta},    (3.1)

where the Fourier coefficients v(r, k) satisfy

\frac{r}{\sigma(r)} \frac{d}{dr}\left[ r\, \sigma(r)\, \frac{dv(r,k)}{dr} \right] = k^2 v(r, k), \quad r \in (0, 1),    (3.2)

\lim_{r \to 0} v(r, k) = 0,    (3.3)

and

v_B(0) = \frac{1}{2\pi} \int_0^{2\pi} u(1, \theta)\, d\theta.    (3.4)

The boundary conditions at r = 1 are Dirichlet or Neumann, depending on which map we consider, the DtN
or the NtD map.
3.1.1 The DtN map
The DtN map is determined by the potential v satisfying (3.2)-(3.3), with Dirichlet boundary condition

v(1, k) = v_B(k),    (3.5)

where v_B(k) are the Fourier coefficients of the boundary potential u_B(\theta). The normal boundary flux has the
Fourier series expansion

\sigma(1) \frac{\partial u(1, \theta)}{\partial r} = \Lambda_\sigma u_B(\theta) = \sigma(1) \sum_{k \in \mathbb{Z},\, k \ne 0} \frac{dv(1,k)}{dr}\, e^{ik\theta},    (3.6)

and we assume for simplicity that \sigma(1) = 1. Then, we deduce formally from (3.6) that e^{ik\theta} are the eigenfunctions of the DtN map \Lambda_\sigma, with eigenvalues

f(k^2) = \frac{dv(1,k)}{dr} \Big/ v(1, k).    (3.7)
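For the reference conductivity \sigma \equiv 1 the eigenvalues are f(k^2) = |k|, since u = r^{|k|} e^{ik\theta} is harmonic. As a minimal numerical illustration (our own sketch, not a computation from the text), one can solve the radial equation in the coordinate z = -\log r, where it becomes v'' = k^2 v, and recover f(k^2) = -v'(0):

```python
import numpy as np

# Check f(k^2) = |k| for sigma = 1 by solving v'' = k^2 v on (0, Z),
# Z = -log(eps). v(0) = 1 is the Dirichlet data, v(Z) = 0 approximates (3.3).
Z, N = 10.0, 1000
dz = Z / N
for k in [1, 2, 3]:
    # interior unknowns v_1, ..., v_{N-1}; boundary values v_0 = 1, v_N = 0
    main = -(2.0 + (k * dz) ** 2) * np.ones(N - 1)
    off = np.ones(N - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    rhs = np.zeros(N - 1)
    rhs[0] = -1.0                 # moves the known v_0 = 1 to the right-hand side
    v = np.linalg.solve(A, rhs)
    # f(k^2) = dv(1,k)/dr / v(1,k) = -v'(0); second order one-sided difference:
    f = (3.0 - 4.0 * v[0] + v[1]) / (2.0 * dz)
    assert abs(f - k) < 1e-2
```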
For the tensor product grids of section 2.2 and layered \sigma, the conductances are independent of the angular index q and take the form

\gamma_{j+1/2,q} = \frac{h}{z(r_{j+1}) - z(r_j)} = \frac{h}{\gamma_j}, \qquad \hat\gamma_{j,q+1/2} = \frac{\hat z(\hat r_{j+1}) - \hat z(\hat r_j)}{h} = \frac{\hat\gamma_j}{h},    (3.8)

with the coordinates

z(r) = \int_r^1 \frac{dt}{t\, \sigma(t)}, \qquad \hat z(r) = \int_r^1 \frac{\sigma(t)}{t}\, dt.    (3.9)

The Kirchhoff equations (2.4) become

\frac{1}{\hat\gamma_j} \left[ \frac{U_{j+1,q} - U_{j,q}}{\gamma_j} - \frac{U_{j,q} - U_{j-1,q}}{\gamma_{j-1}} \right] + \left( \Delta^2 U_j \right)_q = 0,    (3.10)

or, in vector form,

\frac{1}{\hat\gamma_j} \left[ \frac{U_{j+1} - U_j}{\gamma_j} - \frac{U_j - U_{j-1}}{\gamma_{j-1}} \right] + \Delta^2 U_j = 0,    (3.11)

where

U_j = (U_{j,1}, \dots, U_{j,n})^T,    (3.12)
and \Delta^2 is the circulant matrix

\Delta^2 = \frac{1}{h^2} \begin{pmatrix}
-2 & 1 & 0 & \cdots & 1 \\
1 & -2 & 1 & \ddots & \vdots \\
0 & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & 1 & -2 & 1 \\
1 & \cdots & 0 & 1 & -2
\end{pmatrix},    (3.13)
the discretization of the operator \partial_\theta^2 with periodic boundary conditions. It has the eigenvectors

[e^{ik\theta}] = \left( e^{ik\theta_1}, \dots, e^{ik\theta_n} \right)^T,    (3.14)

with entries given by the restriction of the continuum eigenfunctions e^{ik\theta} to the primary grid angles. Here
k is an integer satisfying |k| \le (n-1)/2, and the eigenvalues are -\delta_k^2, where

\delta_k = |k|\, \mathrm{sinc}\left( \frac{kh}{2} \right), \quad \mathrm{sinc}\, x = \frac{\sin x}{x}.    (3.15)
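The eigenvalue formula (3.15) is easy to verify numerically for a small circulant matrix; the sketch below uses n = 7 as an arbitrary illustration:

```python
import numpy as np

# The circulant second-difference matrix (3.13) has eigenvalues -delta_k^2
# with delta_k = |k| sinc(k h / 2), h = 2 pi / n, sinc x = sin x / x.
n = 7
h = 2 * np.pi / n
D2 = np.zeros((n, n))
for i in range(n):
    D2[i, i] = -2.0 / h**2
    D2[i, (i + 1) % n] = 1.0 / h**2
    D2[i, (i - 1) % n] = 1.0 / h**2

theta = h * np.arange(1, n + 1)
for k in range(-(n - 1) // 2, (n - 1) // 2 + 1):
    e_k = np.exp(1j * k * theta)
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(k h/2) = np.sinc(k h/(2 pi))
    delta_k = abs(k) * np.sinc(k * h / (2 * np.pi))
    assert np.allclose(D2 @ e_k, -delta_k**2 * e_k)
```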
To determine the spectral decomposition of the discrete DtN map we proceed as in the continuum case and expand

U_j = V_j(0)\, 1_B + \sum_{|k| \le \frac{n-1}{2},\, k \ne 0} V_j(k)\, [e^{ik\theta}],    (3.16)

where we recall that 1_B \in \mathbb{R}^n is the vector of all ones. We obtain the finite difference equation for the
coefficients V_j(k),

\frac{1}{\hat\gamma_j} \left[ \frac{V_{j+1}(k) - V_j(k)}{\gamma_j} - \frac{V_j(k) - V_{j-1}(k)}{\gamma_{j-1}} \right] - \delta_k^2 V_j(k) = 0,    (3.17)
which is the three point finite difference discretization of

\frac{d}{d\hat z} \left[ \frac{dv(z, k)}{dz} \right] - k^2 v(z, k) = 0,    (3.18)

written in the coordinates (3.9) for v(z, k) = v(r(z), k). The boundary condition at
r = 0 is mapped to

\lim_{z \to \infty} v(z, k) = 0,    (3.19)

and it is implemented in the discretization as V_{\ell+1}(k) = 0. At the boundary r = 1, where z = 0, we specify
V_1(k) as some approximation of v_B(k).
The discrete DtN map \Lambda_\gamma is diagonalized in the basis \{[e^{ik\theta}]\}_{|k| \le \frac{n-1}{2}}, and we denote its eigenvalues by
F(\delta_k^2). Its definition depends on the type of grid that we use, indexed by m_{1/2}, as explained in section 2.2.
In the case m_{1/2} = 0, the first radius next to the boundary is r_2, and we define the boundary flux at \hat r_1 = 1
as (V_1(k) - V_2(k))/\gamma_1. When m_{1/2} = 1, the first radius next to the boundary is \hat r_2, so to compute the flux
at \hat r_1 we introduce a ghost layer at r_0 > 1 and use equation (3.17) for j = 1 to define the boundary flux as

\frac{V_0(k) - V_1(k)}{\gamma_0} = \hat\gamma_1 \delta_k^2 V_1(k) + \frac{V_1(k) - V_2(k)}{\gamma_1}.

Therefore, the eigenvalues of the discrete DtN map are

F(\delta_k^2) = m_{1/2}\, \hat\gamma_1 \delta_k^2 + \frac{V_1(k) - V_2(k)}{\gamma_1 V_1(k)}.    (3.20)

3.1.2 The NtD map
The NtD map has eigenfunctions e^{ik\theta} for k \ne 0 and eigenvalues \tilde f(k^2) = 1/f(k^2). Equivalently, in terms
of the solution v(z, k) of equation (3.18) with boundary conditions (3.19) and

\frac{dv(0, k)}{dz} = -\frac{1}{2\pi} \int_0^{2\pi} J_B(\theta)\, e^{-ik\theta}\, d\theta = -\varphi_B(k),    (3.21)

we have

\tilde f(k^2) = \frac{v(0, k)}{\varphi_B(k)}.    (3.22)

In the discrete case, let us use the grids with m_{1/2} = 1. We obtain that the potential V_j(k) satisfies (3.17)
for j = 1, 2, \dots, \ell, with boundary conditions

\frac{V_0(k) - V_1(k)}{\gamma_0} = \varphi_B(k), \qquad V_{\ell+1}(k) = 0.    (3.23)
The eigenvalues of the discrete NtD map are

\tilde F(\delta_k^2) = \frac{V_1(k)}{\varphi_B(k)}.    (3.24)

3.2

To construct the optimal grids we work with the analytic continuations of the eigenvalues,

\tilde f(\lambda) = \frac{v(0)}{\varphi_B}, \qquad \tilde F(\lambda) = \frac{V_1}{\varphi_B},    (3.25)

where v solves equation (3.18) with k^2 replaced by \lambda and V_j solves equation (3.17) with \delta_k^2 replaced by \lambda.
The spectral parameter may be complex, satisfying \lambda \in \mathbb{C} \setminus (-\infty, 0]. For simplicity, we suppress in the
notation the dependence of v and V_j on \lambda. We consider in detail the discretizations on grids indexed by
m_{1/2} = 1, but the results can be extended to the other type of grids, indexed by m_{1/2} = 0.
Lemma 1. The function \tilde f(\lambda) is of the form

\tilde f(\lambda) = \int_{-\infty}^0 \frac{d\mu(t)}{\lambda - t},    (3.26)

where \mu(t) is the positive spectral measure on (-\infty, 0] of the differential operator \frac{d}{d\hat z} \frac{d}{dz}, with homogeneous
Neumann condition at z = 0 and limit condition (3.19). The function \tilde F(\lambda) has a similar form,

\tilde F(\lambda) = \int_{-\infty}^0 \frac{d\mu_F(t)}{\lambda - t},    (3.27)

where \mu_F(t) is the spectral measure of the difference operator in (3.17) with boundary conditions (3.23).

Proof: The result (3.26) is shown in [44] and it says that \tilde f(\lambda) is essentially a Stieltjes function. To
derive the representation (3.27), we write our difference equations in matrix form for V = (V_1, \dots, V_\ell)^T,

(A - \lambda I)\, V = -\frac{\varphi_B(\lambda)}{\hat\gamma_1}\, e_1.    (3.28)

Here I is the \ell \times \ell identity matrix, e_1 = (1, 0, \dots, 0)^T \in \mathbb{R}^\ell and A is the tridiagonal matrix with entries
A_{ij} = \begin{cases}
\frac{1}{\hat\gamma_i \gamma_i}\, \delta_{i+1,j} + \frac{1}{\hat\gamma_i \gamma_{i-1}}\, \delta_{i-1,j} - \left( \frac{1}{\hat\gamma_i \gamma_i} + \frac{1}{\hat\gamma_i \gamma_{i-1}} \right) \delta_{i,j}, & \text{if } 1 < i \le \ell, \ 1 \le j \le \ell, \\[1mm]
-\frac{1}{\hat\gamma_1 \gamma_1}\, \delta_{1,j} + \frac{1}{\hat\gamma_1 \gamma_1}\, \delta_{2,j}, & \text{if } i = 1, \ 1 \le j \le \ell.
\end{cases}    (3.29)
The Kronecker delta symbol \delta_{i,j} is one when i = j and zero otherwise. Note that A is a Jacobi matrix when
it is defined on the vector space \mathbb{R}^\ell with weighted inner product

\langle a, b \rangle = \sum_{j=1}^\ell \hat\gamma_j a_j b_j, \quad a = (a_1, \dots, a_\ell)^T, \quad b = (b_1, \dots, b_\ell)^T.    (3.30)

That is to say,

\tilde A = \mathrm{diag}\left( \hat\gamma_1^{1/2}, \dots, \hat\gamma_\ell^{1/2} \right) A\, \mathrm{diag}\left( \hat\gamma_1^{-1/2}, \dots, \hat\gamma_\ell^{-1/2} \right)    (3.31)
is a symmetric, tridiagonal matrix, with negative entries on its diagonal and positive entries on its upper/lower diagonal. It follows from [20] that A has simple, negative eigenvalues -\theta_j^2 and eigenvectors
Y_j = (Y_{1,j}, \dots, Y_{\ell,j})^T that are orthogonal with respect to the inner product (3.30). We order the eigenvalues as

0 < \theta_1 < \theta_2 < \dots < \theta_\ell,    (3.32)

and normalize the eigenvectors by

\sum_{p=1}^\ell \hat\gamma_p Y_{p,j}^2 = 1.    (3.33)
Then, we obtain from (3.25) and (3.28), after expanding V in the basis of the eigenvectors, that

\tilde F(\lambda) = \sum_{j=1}^\ell \frac{Y_{1,j}^2}{\lambda + \theta_j^2}.    (3.34)

This is of the form (3.27), with the discrete spectral measure

\mu_F(t) = \sum_{j=1}^\ell \xi_j\, H\left( t + \theta_j^2 \right), \quad \xi_j = Y_{1,j}^2,    (3.35)

where H is the Heaviside step function. The rational function (3.34) also admits the continued fraction representation

\tilde F(\lambda) = \cfrac{1}{\hat\gamma_1 \lambda + \cfrac{1}{\gamma_1 + \cfrac{1}{\hat\gamma_2 \lambda + \cfrac{1}{\gamma_2 + \dots + \cfrac{1}{\hat\gamma_\ell \lambda + \cfrac{1}{\gamma_\ell}}}}}}.    (3.36)

This representation is known in the theory of rational function approximations [59, 44] and its derivation is
given in appendix B.
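The agreement of the resolvent formula (3.34) with the continued fraction (3.36) can be tested numerically for a small ladder. The parameter values below are arbitrary positives chosen for illustration, and the matrix A follows the sign conventions of (3.29) as reconstructed here:

```python
import numpy as np

g = np.array([0.7, 1.3, 0.4])    # gamma_1 .. gamma_l
gh = np.array([1.1, 0.6, 2.0])   # gamma-hat_1 .. gamma-hat_l
l = len(g)

# Tridiagonal operator of (3.29): Neumann condition at i = 1, V_{l+1} = 0.
A = np.zeros((l, l))
for i in range(l):
    A[i, i] -= 1.0 / (gh[i] * g[i])
    if i + 1 < l:
        A[i, i + 1] = 1.0 / (gh[i] * g[i])
    if i > 0:
        A[i, i - 1] = 1.0 / (gh[i] * g[i - 1])
        A[i, i] -= 1.0 / (gh[i] * g[i - 1])

def F_resolvent(lam):
    # F(lam) = V_1 with (A - lam I) V = -(1/gh_1) e_1, i.e. phi_B = 1
    rhs = np.zeros(l)
    rhs[0] = -1.0 / gh[0]
    return np.linalg.solve(A - lam * np.eye(l), rhs)[0]

def F_contfrac(lam):
    # continued fraction (3.36), evaluated bottom-up
    x = 1.0 / g[-1]
    for j in range(l - 1, 0, -1):
        x = 1.0 / (g[j - 1] + 1.0 / (gh[j] * lam + x))
    return 1.0 / (gh[0] * lam + x)

for lam in [0.1, 1.0, 5.0]:
    assert abs(F_resolvent(lam) - F_contfrac(lam)) < 1e-10
```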
Since both \tilde f(\lambda) and \tilde F(\lambda) are Stieltjes functions, we can design finite volume schemes (i.e., layered networks) with accurate, rational approximations \tilde F(\lambda) of \tilde f(\lambda). There are various approximants \tilde F(\lambda), with
different rates of convergence to \tilde f(\lambda) as \ell \to \infty. We discuss two choices below, in sections 3.2.2 and 3.2.3,
but refer the reader to [30, 29, 32] for details on various Pade approximants and the resulting discretization
schemes. No matter which approximant we choose, we can compute the network conductances, i.e., the
parameters \gamma_j and \hat\gamma_j for j = 1, \dots, \ell, from 2\ell measurements of \tilde f(\lambda). The type of measurements dictates
the type of approximant, and only some of them are directly accessible in the EIT problem. For example,
the spectral measure \mu(t) cannot be determined in a stable manner in EIT. However, we can measure the
eigenvalues \tilde f(k^2) for integer k, and thus we can design a rational, multi-point Pade approximant.

Remark 1. We describe in detail in appendix D how to determine the parameters \{\gamma_j, \hat\gamma_j\}_{j=1,\dots,\ell} from 2\ell
point measurements of \tilde f(\lambda), such as \tilde f(k^2), for k = 1, \dots, \frac{n-1}{2} = 2\ell. There are two steps. The first is to
write \tilde F(\lambda) as the ratio of two polynomials of \lambda, and determine the 2\ell coefficients of these polynomials from
the measurements \tilde F(\delta_k^2) of \tilde f(k^2), for 1 \le k \le \frac{n-1}{2}.
The exponential instability of EIT comes into play in this step, because it involves the inversion of a Vandermonde matrix. It is known [33] that such matrices have condition numbers that grow exponentially with
the dimension \ell. The second step is to determine the parameters \{\gamma_j, \hat\gamma_j\}_{j=1,\dots,\ell} from the coefficients of the
polynomials. This can be done in a stable manner with the Euclidean division algorithm [47].
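The Vandermonde conditioning issue is easy to observe. The following sketch (our own illustration, with interpolation nodes k^2 standing in for \delta_k^2) shows the rapid growth of the condition number with the number of nodes:

```python
import numpy as np

conds = []
for l in [2, 3, 4, 5]:
    nodes = np.array([float(k * k) for k in range(1, 2 * l + 1)])
    V = np.vander(nodes, increasing=True)   # columns 1, lambda, lambda^2, ...
    conds.append(np.linalg.cond(V))

assert all(b > a for a, b in zip(conds, conds[1:]))   # monotone growth
assert conds[-1] / conds[0] > 1e6                     # many orders of magnitude
```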
The approximation problem can also be formulated in terms of the DtN map, with F(\lambda) = 1/\tilde F(\lambda).
Moreover, the representation (3.36) generalizes to both types of grids, by replacing \hat\gamma_1 with \hat\gamma_1 m_{1/2}. Recall
equation (3.20) and note that the parameter \hat\gamma_1 does not play any role when m_{1/2} = 0.
3.2.1

Since the parameters determined from the measurements are

\gamma_j = \int_{r_{j+1}}^{r_j} \frac{dr}{r\, \sigma(r)}, \qquad \hat\gamma_j = \int_{\hat r_{j+1}}^{\hat r_j} \frac{\sigma(r)}{r}\, dr, \quad j = 1, \dots, \ell,    (3.37)

we could determine the optimal placement of the radii r_j and \hat r_j, if we knew the conductivity \sigma(r). But \sigma(r)
is the unknown in the inverse problem. The key idea behind the resistor network approach to inversion is
that the grid depends only weakly on \sigma, and we can compute it approximately for the reference conductivity
\sigma^{(o)} \equiv 1.
Let us denote by \tilde f^{(o)}(\lambda) the analog of (3.25) for conductivity \sigma^{(o)}, and let \tilde F^{(o)}(\lambda) be its rational
approximant, with coefficients \gamma_j^{(o)} and \hat\gamma_j^{(o)} given by

\gamma_j^{(o)} = \int_{r_{j+1}^{(o)}}^{r_j^{(o)}} \frac{dr}{r} = \log \frac{r_j^{(o)}}{r_{j+1}^{(o)}}, \qquad \hat\gamma_j^{(o)} = \int_{\hat r_{j+1}^{(o)}}^{\hat r_j^{(o)}} \frac{dr}{r} = \log \frac{\hat r_j^{(o)}}{\hat r_{j+1}^{(o)}}, \quad j = 1, \dots, \ell.    (3.38)

Starting from r_1^{(o)} = \hat r_1^{(o)} = 1, the radii follow as

r_{j+1}^{(o)} = \exp\left( -\sum_{q=1}^j \gamma_q^{(o)} \right), \qquad \hat r_{j+1}^{(o)} = \exp\left( -\sum_{q=1}^j \hat\gamma_q^{(o)} \right), \quad j = 1, \dots, \ell.    (3.39)

We call the radii (3.39) optimal. The name refers to the fact that finite volume discretizations on grids with
such radii give an NtD map that matches the measurements of the continuum map for the reference
conductivity \sigma^{(o)}. The measurements determine the coefficients \{\gamma_j^{(o)}, \hat\gamma_j^{(o)}\} by matching \tilde F^{(o)}(\lambda) to \tilde f^{(o)}(\lambda),
for example at the points \lambda = \delta_k^2, where k = 1, \dots, (n-1)/2. This is because the distribution of the radii (3.39) in the interval [0, 1] depends
on what measurements we make, as illustrated with examples in sections 3.2.2 and 3.2.3.
The inversion uses a reconstruction mapping Q_n defined on the set D_n of measurements, with values in \mathbb{R}_+^{(n-1)/2}. It takes the measurements of \tilde f(\lambda) and returns the
(n-1)/2 positive numbers

\hat\sigma_j = \frac{\hat\gamma_j}{\hat\gamma_j^{(o)}}, \quad j = 2 - m_{1/2}, \dots, \ell, \qquad \sigma_j = \frac{\gamma_j^{(o)}}{\gamma_j}, \quad j = 1, \dots, \ell,    (3.40)

where we recall the relation (2.14) between \ell and n. We call Q_n a reconstruction mapping because, if we
take \sigma_j and \hat\sigma_j as point values of a conductivity at the nodes of the optimal
grid, we expect to get a conductivity that is close to the interpolation of the true \sigma(r). This is assuming that
the grid does not depend strongly on \sigma(r). The proof that the resulting sequence of conductivity functions
indexed by \ell converges to the true \sigma(r) as \ell \to \infty is carried out in [14], given the spectral measure of \tilde f(\lambda).
We review it in section 3.3, and discuss the measurements in section 3.2.3. The convergence proof for other
measurements remains an open question, but the numerical results indicate that the result should hold.
Moreover, the ideas extend to the two dimensional case, as explained in detail in sections 4 and 5.
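A quick sanity check of the reconstruction mapping idea: for constant conductivity \sigma \equiv c the integrals (3.37) scale exactly by 1/c and c, so the ratios in (3.40) recover c. The radii in the sketch below are arbitrary decreasing values, chosen only for illustration:

```python
import numpy as np

c = 2.5
r = np.array([1.0, 0.8, 0.55, 0.3])    # primary radii r_1 > r_2 > ...
rh = np.array([1.0, 0.9, 0.65, 0.4])   # dual radii

def gammas(radii, sigma):
    # for constant sigma, gamma_j = log(r_j / r_{j+1}) / sigma and
    # gamma-hat_j = sigma * log(r-hat_j / r-hat_{j+1}), from (3.37)
    logs = np.log(radii[:-1] / radii[1:])
    return logs / sigma, logs * sigma

g, _ = gammas(r, c)
_, gh = gammas(rh, c)
g_o, _ = gammas(r, 1.0)     # reference conductivity sigma^(o) = 1
_, gh_o = gammas(rh, 1.0)

assert np.allclose(g_o / g, c)    # sigma_j = gamma_j^(o) / gamma_j
assert np.allclose(gh / gh_o, c)  # sigma-hat_j = gamma-hat_j / gamma-hat_j^(o)
```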
3.2.2
Let us begin with an example that arises in the discretization of the problem with lumped current measurements

J_q = \frac{1}{h} \int_{\hat\theta_q}^{\hat\theta_{q+1}} \Lambda_\sigma u_B(\theta)\, d\theta, \quad q = 1, \dots, n,    (3.41)

for h = \frac{2\pi}{n}. Taking boundary potentials u_B(\theta) = e^{ik\theta}, with integer |k| \le \frac{n-1}{2}, we obtain

\frac{1}{h} \int_{\hat\theta_q}^{\hat\theta_{q+1}} f(k^2)\, e^{ik\theta}\, d\theta = f(k^2)\, \mathrm{sinc}\left( \frac{kh}{2} \right) e^{ik\theta_q} = \frac{f(k^2)}{|k|}\, \delta_k\, e^{ik\theta_q}, \quad q = 1, \dots, n.

Thus, the measurement operator M_n(\Lambda_\sigma) has eigenvectors [e^{ik\theta}] = \left( e^{ik\theta_1}, \dots, e^{ik\theta_n} \right)^T and eigenvalues \frac{f(k^2)}{|k|}\, \delta_k.
The approximation problem is to find the finite volume discretization with DtN map \Lambda_\gamma = M_n(\Lambda_\sigma).
Since both \Lambda_\gamma and M_n(\Lambda_\sigma) have the same eigenvectors, this is equivalent to the rational approximation problem
of finding the network conductances (3.8) (i.e., \gamma_j and \hat\gamma_j), so that

F(\delta_k^2) = \frac{f(k^2)}{|k|}\, \delta_k, \quad k = 1, \dots, \frac{n-1}{2}.    (3.42)

The eigenvalues depend only on |k|, and the case k = 0 gives no information, because it corresponds to
constant boundary potentials that lie in the null space of the DtN map. This is why we take in (3.42) only
the positive values of k, and obtain the same number (n-1)/2 of measurements as unknowns: \{\gamma_j\}_{j=1,\dots,\ell}
and \{\hat\gamma_j\}_{j=2-m_{1/2},\dots,\ell}.
When we compute the optimal grid, we take the reference \sigma^{(o)} \equiv 1, in which case f^{(o)}(k^2) = |k|. Thus,
Figure 3: Examples of optimal grids with n equidistant boundary points and primary and dual radii shown
with distinct markers. On the left we have n = 25 and a grid indexed by m_{1/2} = 1, with \ell = m + 1 = 6. On the right
we have n = 35 and a grid indexed by m_{1/2} = 0, with \ell = m + 1 = 8. The grid shown in red is computed
with formulas (3.44). The grid shown in blue is obtained from the rational approximation (3.50).
the optimal grid computation reduces to that of rational interpolation of \tilde f^{(o)}(\lambda),

F^{(o)}(\delta_k^2) = \delta_k = f^{(o)}(\delta_k^2), \quad k = 1, \dots, \frac{n-1}{2}.    (3.43)

This is solved explicitly in [10]. For example, when m_{1/2} = 1, the coefficients \gamma_j^{(o)} and \hat\gamma_j^{(o)} are given by

\gamma_j^{(o)} = h \cot\left[ \frac{h}{2} (2\ell - 2j + 1) \right], \qquad \hat\gamma_j^{(o)} = h \cot\left[ \frac{h}{2} (2\ell - 2j + 2) \right], \quad j = 1, 2, \dots, \ell,    (3.44)

and the radii follow from (3.39). They satisfy the interlacing relations

1 = \hat r_1^{(o)} = r_1^{(o)} > \hat r_2^{(o)} > r_2^{(o)} > \dots > \hat r_{\ell+1}^{(o)} > r_{\ell+1}^{(o)} > 0,    (3.45)

as can be shown easily using the monotonicity of the cotangent and exponential functions. We show an
illustration of the resulting grids in red, in Figure 3. Note the refinement toward the boundary r = 1 and
the coarsening toward the center r = 0 of the disk. Note also that the dual points are almost
half way between the primary points. The last primary radii r_{\ell+1}^{(o)} are small, but the points
do not reach the center of the domain at r = 0.
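The qualitative grid properties (positivity, refinement toward the boundary, interlacing) can be checked numerically. The sketch below assumes the closed-form coefficients \gamma_j^{(o)} = h\cot[(2\ell - 2j + 1)h/2] and \hat\gamma_j^{(o)} = h\cot[(2\ell - 2j + 2)h/2] as reconstructed in (3.44); n = 25 matches the left grid of Figure 3:

```python
import numpy as np

n = 25
l = (n - 1) // 4                 # 2l + m_1/2 - 1 = (n - 1)/2 with m_1/2 = 1
h = 2 * np.pi / n
j = np.arange(1, l + 1)
g_o = h / np.tan((2 * l - 2 * j + 1) * h / 2)    # gamma_j^(o), cot = 1/tan
gh_o = h / np.tan((2 * l - 2 * j + 2) * h / 2)   # gamma-hat_j^(o)
assert np.all(g_o > 0) and np.all(gh_o > 0)

r = np.exp(-np.cumsum(g_o))      # r_2, ..., r_{l+1} from (3.39)
rh = np.exp(-np.cumsum(gh_o))    # r-hat_2, ..., r-hat_{l+1}

# refinement toward the boundary r = 1 and interlacing as in (3.45):
assert np.all(np.diff(g_o) > 0)    # steps grow away from the boundary
assert np.all(rh > r)              # r-hat_{j+1} > r_{j+1}
assert np.all(rh[1:] < r[:-1])     # r-hat_{j+1} < r_j
assert r[-1] > 0                   # grid does not reach the center
```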
In sections 4 and 5 we work with slightly different measurements of the DtN map, \Lambda_\gamma = M_n(\Lambda_\sigma), with
entries defined by

\left( M_n(\Lambda_\sigma) \right)_{p,q} = \int_0^{2\pi} \chi_p(\theta)\, \Lambda_\sigma \chi_q(\theta)\, d\theta, \quad p \ne q, \qquad \left( M_n(\Lambda_\sigma) \right)_{p,p} = -\sum_{q \ne p} \left( M_n(\Lambda_\sigma) \right)_{p,q},    (3.46)

using the non-negative measurement (electrode) functions \chi_q(\theta) that are compactly supported in (\hat\theta_q, \hat\theta_{q+1}),
and are normalized by

\int_0^{2\pi} \chi_q(\theta)\, d\theta = 1.

Here we take the indicator-type functions

\chi_q(\theta) = \begin{cases} 1/h, & \theta \in (\hat\theta_q, \hat\theta_{q+1}), \\ 0, & \text{otherwise}, \end{cases}
and obtain after a calculation given in appendix C that the entries of M_n(\Lambda_\sigma) are given by

\left( M_n(\Lambda_\sigma) \right)_{p,q} = \frac{1}{2\pi} \sum_{k \in \mathbb{Z}} e^{ik(\theta_p - \theta_q)}\, f(k^2)\, \mathrm{sinc}^2\left( \frac{kh}{2} \right), \quad p, q = 1, \dots, n.    (3.47)

This operator is diagonalized in the same Fourier basis,

M_n(\Lambda_\sigma)\, [e^{ik\theta}] = F_e(\delta_k^2)\, [e^{ik\theta}], \quad |k| \le \frac{n-1}{2},    (3.48)

with eigenvectors [e^{ik\theta}] defined in (3.14) and scaled eigenvalues

F_e(\delta_k^2) = f(k^2)\, \mathrm{sinc}^2\left( \frac{kh}{2} \right) = F(\delta_k^2)\, \mathrm{sinc}\left( \frac{kh}{2} \right).    (3.49)

The optimal grid for these measurements follows from the rational approximation problem with data

F_e^{(o)}(\delta_k^2) = |k|\, \mathrm{sinc}^2\left( \frac{kh}{2} \right) = \delta_k\, \mathrm{sinc}\left( \frac{kh}{2} \right), \quad k = 1, \dots, \frac{n-1}{2},    (3.50)

which is no longer given by the explicit formulas (3.44), but we can compute it as explained in Remark 1 and appendix D. We show in Figure 3 two examples of the
grids, and note that they are very close to those obtained from the rational interpolation (3.43). This is not
surprising because the sinc factor in (3.50) is not significantly different from 1 over the range |k| \le \frac{n-1}{2},

\frac{\sin\left( \frac{\pi}{2} \frac{n-1}{n} \right)}{\frac{\pi}{2} \frac{n-1}{n}} \le \mathrm{sinc}\left( \frac{kh}{2} \right) \le 1.

Thus, many eigenvalues F_e^{(o)}(\delta_k^2) are approximately equal to \delta_k, and this is why the grids are similar.
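The bound on the sinc factor is easy to verify numerically; for n = 25 (as in Figure 3) the lower bound is about 0.66, close to 2/\pi:

```python
import numpy as np

n = 25
h = 2 * np.pi / n
k = np.arange(1, (n - 1) // 2 + 1)
# sinc(k h / 2) with sinc x = sin x / x; np.sinc(x) = sin(pi x)/(pi x)
s = np.sinc(k * h / (2 * np.pi))
lower = np.sin(np.pi / 2 * (n - 1) / n) / (np.pi / 2 * (n - 1) / n)
assert np.all(s <= 1.0)
assert np.all(s >= lower - 1e-12)    # minimum attained at k = (n - 1)/2
assert lower > 2 / np.pi - 0.05      # bounded away from zero for all n
```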
3.2.3
Another example of rational approximation arises in a modified problem, where the positive spectral measure
in Lemma 1 is discrete,

\mu(t) = \sum_{j=1}^\infty \xi_j\, H\left( t + \theta_j^2 \right).    (3.51)

This does not hold for equation (3.2) or equivalently (3.18), where the origin of the disk r = 0 is mapped to
z = \infty in the logarithmic coordinates z(r), and the measure \mu(t) is continuous. To obtain a measure like (3.51),
we change the problem here and in the next section to

\frac{r}{\sigma(r)} \frac{d}{dr}\left[ r\, \sigma(r)\, \frac{dv(r)}{dr} \right] - \lambda\, v(r) = 0, \quad r \in (\epsilon, 1),    (3.52)

with boundary condition

v(\epsilon) = 0.    (3.53)

The Dirichlet boundary condition at r = \epsilon may be realized if we have a perfectly conducting medium in the
disk concentric with \Omega and of radius \epsilon. Otherwise, v(\epsilon) = 0 gives an approximation of our problem, for small
but finite \epsilon.
Coordinate change and scaling

It is convenient here and in the next section to introduce the scaled logarithmic coordinate

\zeta(r) = \frac{1}{Z}\, z^{(o)}(r) = \frac{1}{Z} \int_r^1 \frac{dt}{t}, \qquad Z = \log \frac{1}{\epsilon},    (3.54)

which maps (\epsilon, 1] to [0, 1), and the scaled versions of the coordinates (3.9),

z'(\zeta) = \frac{z(r(\zeta))}{Z} = \int_0^\zeta \frac{dt}{\sigma'(t)}, \qquad \hat z'(\zeta) = \frac{\hat z(r(\zeta))}{Z} = \int_0^\zeta \sigma'(t)\, dt,    (3.55)

where \sigma'(\zeta) = \sigma(r(\zeta)) and

r(\zeta) = e^{-Z\zeta}.    (3.56)

The scaled potential

v'(z') = \frac{v(r(z'))}{Z \varphi_B}    (3.57)

satisfies

\frac{d}{d\hat z'} \left[ \frac{dv'}{dz'} \right] - \lambda' v' = 0, \quad z' \in (0, L'), \qquad \frac{dv'(0)}{dz'} = -1, \qquad v'(L') = 0,    (3.58)

with \lambda' = Z^2 \lambda and

L' = z'(1) = \int_0^1 \frac{dt}{\sigma'(t)}.    (3.59)

Remark 3. We assume in the remainder of this section and in section 3.3 that we work with the scaled
equations (3.58) and drop the primes for simplicity of notation.
The inverse spectral problem

The differential operator

\frac{d}{d\hat z} \frac{d}{dz},

with homogeneous Neumann condition at z = 0 and Dirichlet condition at z = L, is symmetric with respect to the weighted inner product

(a, b) = \int_0^{\hat L} a(z)\, b(z)\, d\hat z = \int_0^1 a(z(\zeta))\, b(z(\zeta))\, \sigma(\zeta)\, d\zeta, \qquad \hat L = \hat z(1).    (3.60)

It has negative eigenvalues \{-\theta_j^2\}_{j=1,2,\dots}, the points of increase of the measure (3.51), and eigenfunctions
y_j(z). They are orthogonal with respect to the inner product (3.60), and we normalize them by

\|y_j\|^2 = (y_j, y_j) = \int_0^{\hat L} y_j^2(z)\, d\hat z = 1.    (3.61)

The weights in (3.51) are then

\xi_j = y_j^2(0).    (3.62)
For the discrete problem we assume in the remainder of the section that m_{1/2} = 1, and work with the
NtD map, that is with \tilde F(\lambda) represented in Lemma 1 in terms of the discrete measure \mu_F(t). Comparing
(3.51) and (3.35), we note that we ask that \mu_F(t) be the truncated version of \mu(t), given the first \ell weights \xi_j
and eigenvalues -\theta_j^2, for j = 1, \dots, \ell. We arrive at the classic inverse spectral problem [34, 19, 38, 54, 55],
which seeks an approximation of the conductivity from the truncated measure. We can solve it using the
theory of resistor networks, via an inverse eigenvalue problem [20] for the Jacobi-like matrix A defined in
(3.29). The key ingredient in the connection between the continuous and discrete eigenvalue problems is the
optimal grid, as was first noted in [12] and proved in [14]. We review this result in section 3.3.
The truncated measure optimal grid
The optimal grid is obtained by solving the discrete inverse problem with spectral data for the reference conductivity $\sigma^{(o)}(\zeta) \equiv 1$,
$$D_n^{(o)} = \left\{ \lambda_j^{(o)},\, \gamma_j^{(o)}: \;\; \gamma_j^{(o)} = 2, \;\; \lambda_j^{(o)} = \pi\Big(j - \frac{1}{2}\Big), \;\; j = 1, \ldots, \ell \right\}. \qquad (3.63)$$
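The data (3.63) can be checked directly. For $\sigma^{(o)} \equiv 1$ the operator reduces to $-d^2/dz^2$ with a Neumann condition at $z = 0$ and a Dirichlet condition at $z = 1$; this is a verification we add here, using the normalization (3.61) and the boundary values of the eigenfunctions as weights:

```latex
y_j^{(o)}(z) = \sqrt{2}\,\cos\!\big(\pi\big(j - \tfrac{1}{2}\big)z\big), \qquad
\lambda_j^{(o)} = \pi\big(j - \tfrac{1}{2}\big), \qquad
\gamma_j^{(o)} = \big[y_j^{(o)}(0)\big]^2 = 2,
```

since $\int_0^1 \big[y_j^{(o)}(z)\big]^2\, dz = 1$.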
The parameters $\{\kappa_j^{(o)}, \hat\kappa_j^{(o)}\}_{j=1,\ldots,\ell}$ can be determined from $D_n^{(o)}$ with the Lanczos algorithm [65, 20] reviewed in appendix E. The grid points follow by summing the steps,
$$z_{j+1}^{(o)} = z_j^{(o)} + \kappa_j^{(o)} = \sum_{q=1}^{j} \kappa_q^{(o)}, \qquad \hat z_{j+1}^{(o)} = \hat z_j^{(o)} + \hat\kappa_j^{(o)} = \sum_{q=1}^{j} \hat\kappa_q^{(o)}, \qquad j = 1, \ldots, \ell, \qquad (3.64)$$
where $z_1^{(o)} = \hat z_1^{(o)} = 0$. This is in the logarithmic coordinates that are related to the optimal radii as in (3.56). The grid is calculated explicitly in [14, Appendix A]. We summarize its properties in the next lemma, for large $\ell$.
for large `.
Lemma 2. The primary and dual grid points interlace,
$$0 = z_1^{(o)} = \hat z_1^{(o)} < \hat z_2^{(o)} < z_2^{(o)} < \cdots < \hat z_{\ell+1}^{(o)} < z_{\ell+1}^{(o)}, \qquad (3.65)$$
the primary grid steps satisfy
$$\kappa_j^{(o)} = \begin{cases} \dfrac{2 + O\big[(\ell - j)^{-1} + j^{-2}\big]}{\pi\sqrt{\ell^2 - j^2}}, & \text{if } 1 \le j \le \ell - 1, \\[2ex] \dfrac{2 + O(\ell^{-1})}{\pi\ell}, & \text{if } j = \ell, \end{cases} \qquad (3.66)$$
Figure 4: Example of a truncated measure optimal grid with $\ell = 6$, in the logarithmic scaled coordinates on $[0, 1]$. The primary points and the dual points are denoted with different markers.
Figure 5: The radial grid obtained with the coordinate change $r = e^{-Z\zeta}$. The scale $Z = \log(1/\epsilon)$ affects the distribution of the radii. The choice $\epsilon = 0.1$ is in blue, $\epsilon = 0.05$ is in red and $\epsilon = 0.01$ is in black. The primary radii and the dual ones are marked as in Figure 4.
and the dual grid steps are
$$\hat\kappa_j^{(o)} = \frac{2 + O\big[(\ell + 1 - j)^{-1} + j^{-2}\big]}{\pi\sqrt{\ell^2 - (j - 1/2)^2}}, \qquad 1 \le j \le \ell. \qquad (3.67)$$
We show in Figure 4 an example for the case $\ell = 6$. To compare it with the grid in Figure 3, we plot in Figure 5 the radii given by the coordinate transformation (3.56), for three different truncation radii $\epsilon$. Note that the primary and dual points are interlaced, but the dual points are not halfway between the primary points, as was the case in Figure 3. Moreover, the grid is not refined near the boundary at $r = 1$. In fact, there is accumulation of the grid points near the center of the disk, where we truncate the domain. The smaller the truncation radius $\epsilon$, the larger the scale $Z = \log(1/\epsilon)$, and the more accumulation near the center.
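The qualitative behavior of this grid can be reproduced from the leading-order terms of the step asymptotics (3.66)-(3.67). The sketch below drops the $O(\cdot)$ corrections and assumes the $1/\pi$ normalization that keeps the steps summing to an $O(1)$ length, so it is an illustration rather than the exact grid of [14, Appendix A]:

```python
import numpy as np

# Leading-order steps of the truncated measure optimal grid, from the
# asymptotics (3.66)-(3.67) with the O(.) corrections dropped; an
# illustrative approximation, not the exact grid.
ell = 6
j = np.arange(1, ell + 1)
kappa = np.empty(ell)                                       # primary steps
kappa[:-1] = 2.0 / (np.pi * np.sqrt(ell**2 - j[:-1] ** 2.0))
kappa[-1] = 2.0 / (np.pi * ell)
kappa_hat = 2.0 / (np.pi * np.sqrt(ell**2 - (j - 0.5) ** 2))  # dual steps

z = np.concatenate(([0.0], np.cumsum(kappa)))          # primary grid points
z_hat = np.concatenate(([0.0], np.cumsum(kappa_hat)))  # dual grid points

# Radial grid r = exp(-Z*zeta) for truncation radius eps = 0.01 (Figure 5).
Z = np.log(1.0 / 0.01)
r = np.exp(-Z * z)
```

The steps are nearly uniform for small $j$, so the grid is not refined near $z = 0$ (the boundary $r = 1$), while the exponential map clusters the radii toward the center, as in Figure 5.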
Intuitively, we can say that the grids in Figure 3 are much superior to the ones computed from the
truncated measure, for both the forward and inverse EIT problem. Indeed, for the forward problem, the rate
of convergence of $F(\lambda)$ to $f(\lambda)$ on the truncated measure grids is algebraic [14],
$$f(\lambda) - F(\lambda) = \sum_{j=\ell+1}^{\infty} \frac{\gamma_j}{\lambda_j^2 + \lambda} = O\left(\sum_{j=\ell+1}^{\infty} \frac{1}{j^2}\right) = O\left(\frac{1}{\ell}\right).$$
The rational interpolation grids described in section 3.2.2 give exponential convergence of $F(\lambda)$ to $f(\lambda)$ [51]. For the inverse problem, we expect the resolution of reconstructions of $\sigma$ to decrease rapidly away from the boundary, where we make the measurements, so it makes sense to invert on grids like those in Figure 3, that are refined near $r = 1$.
The examples in Figures 3 and 5 show the strong dependence of the grids on the measurement setup. Although the grids in Figure 5 are not good for the EIT problem, they are optimal for the inverse spectral problem. The optimality is in the sense that the grids give an exact match of the spectral measurements (3.63) of the NtD map for conductivity $\sigma^{(o)}$. Furthermore, they give a very good match of the spectral measurements (3.68) for the unknown $\sigma$, and the reconstructed conductivity on them converges to the true $\sigma$, as we show next.
3.3
Let $Q_n : \mathcal{D}_n \to \mathbb{R}^{2\ell}_+$ be the reconstruction mapping that takes the data
$$\mathcal{D}_n = \{\lambda_j,\, \gamma_j, \;\; j = 1, \ldots, \ell\} \qquad (3.68)$$
to the $2\ell = (n-1)/2$ positive values $\{\sigma_j, \hat\sigma_{j+1}\}_{j=1,\ldots,\ell}$ given by
$$\sigma_j = \frac{\hat\kappa_j}{\hat\kappa_j^{(o)}}, \qquad \hat\sigma_{j+1} = \frac{\kappa_j^{(o)}}{\kappa_j}, \qquad j = 1, 2, \ldots, \ell. \qquad (3.69)$$
The computation of $\{\kappa_j, \hat\kappa_j\}_{j=1,\ldots,\ell}$ requires solving the discrete inverse spectral problem with data $\mathcal{D}_n$, using for example the Lanczos algorithm reviewed in appendix E. We define the reconstruction $\sigma^\ell(\zeta)$ of the conductivity as the piecewise constant interpolation of the point values (3.69) on the optimal grid (3.64).
We have
$$\sigma^\ell(\zeta) = \begin{cases} \sigma_j, & \text{if } \zeta \in [z_j^{(o)}, \hat z_{j+1}^{(o)}), \;\; j = 1, \ldots, \ell, \\ \hat\sigma_j, & \text{if } \zeta \in [\hat z_j^{(o)}, z_j^{(o)}), \;\; j = 2, \ldots, \ell + 1, \\ \hat\sigma_{\ell+1}, & \text{if } \zeta \in [z_{\ell+1}^{(o)}, 1], \end{cases} \qquad (3.70)$$
and we discuss here its convergence to the true conductivity function $\sigma(\zeta)$, as $\ell \to \infty$.
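The discrete inverse spectral step inside $Q_n$ amounts to recovering a Jacobi (symmetric tridiagonal) matrix from $\ell$ eigenvalues and weights. The sketch below shows this generic inverse eigenvalue computation with a Lanczos recursion and full reorthogonalization; it is a minimal stand-in for the algorithm of [65, 20], and the extraction of the parameters $\{\kappa_j, \hat\kappa_j\}$ from the recursion coefficients is not shown:

```python
import numpy as np

def jacobi_from_spectral_data(theta, w):
    """Recover a Jacobi matrix with prescribed eigenvalues theta and spectral
    weights w, by running Lanczos on diag(theta) with starting vector
    proportional to sqrt(w). A generic sketch, not the exact algorithm of [20]."""
    n = len(theta)
    A = np.diag(theta)
    q = np.sqrt(w) / np.linalg.norm(np.sqrt(w))
    alphas, betas, Q = [], [], [q]
    for k in range(n):
        v = A @ Q[k]
        a = Q[k] @ v
        v = v - a * Q[k] - (betas[-1] * Q[k - 1] if k > 0 else 0.0)
        for u in Q:                      # full reorthogonalization for stability
            v -= (u @ v) * u
        alphas.append(a)
        if k < n - 1:
            b = np.linalg.norm(v)
            betas.append(b)
            Q.append(v / b)
    return np.array(alphas), np.array(betas)

# spectral data of the reference problem (3.63): lambda_j = pi*(j-1/2), gamma_j = 2
ell = 6
theta = (np.pi * (np.arange(1, ell + 1) - 0.5)) ** 2
w = 2.0 * np.ones(ell)
a, b = jacobi_from_spectral_data(theta, w)
J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
```

The tridiagonal matrix `J` reproduces the prescribed spectrum, which is the consistency property the reconstruction mapping relies on.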
To state the convergence result, we need some assumptions on the decay with $j$ of the perturbations of the reference spectral data,
$$\delta\lambda_j = \lambda_j - \lambda_j^{(o)}, \qquad \delta\gamma_j = \gamma_j - \gamma_j^{(o)}. \qquad (3.71)$$
The asymptotic behavior of $\lambda_j$ and $\gamma_j$ is well known, under various smoothness requirements on $\sigma$ [55, 60, 21]. For example, if $\sigma(\zeta) \in H^3[0,1]$, we have
$$\delta\lambda_j = \lambda_j - \lambda_j^{(o)} = \frac{\int_0^1 q(\zeta)\, d\zeta}{\pi(2j - 1)} + O(j^{-2}) \quad \text{and} \quad \delta\gamma_j = \gamma_j - \gamma_j^{(o)} = O(j^{-2}), \qquad (3.72)$$
where
$$q(\zeta) = \sigma^{-\frac12}(\zeta)\, \frac{d^2 \sigma^{\frac12}(\zeta)}{d\zeta^2}. \qquad (3.73)$$
Theorem 1 establishes the convergence of the reconstruction (3.70) under the decay assumptions
$$\delta\lambda_j = O\left(\frac{\log j}{j^s}\right), \qquad \delta\gamma_j = O\left(\frac{1}{j^s}\right), \qquad \text{for some } s > 1. \qquad (3.74)$$
The expansion (3.72) shows that $\delta\gamma_j$ satisfies this decay for $\sigma \in H^3[0,1]$, but $\delta\lambda_j = O(j^{-1})$ decays too slowly when
$$\bar q = \int_0^1 q(\zeta)\, d\zeta \ne 0. \qquad (3.75)$$
To deal with this case, we compare the optimal grid points $\{z_j^{(o)}, \hat z_j^{(o)}\}$ with the points $\{z_j^{(q)}, \hat z_j^{(q)}\}$, for $j = 1, \ldots, \ell$. These are computed by solving the discrete inverse spectral problem with data $\{\lambda_j^{(q)}, \gamma_j^{(q)}, \; j = 1, \ldots, \ell\}$, for the conductivity function
$$\sigma^{(q)}(\zeta) = \frac{1}{4}\left(e^{\sqrt{\bar q}\,\zeta} + e^{-\sqrt{\bar q}\,\zeta}\right)^2. \qquad (3.76)$$
This conductivity is constructed so that its Schrödinger potential (3.73) is the constant $\bar q$, because $\sqrt{\sigma^{(q)}}$ satisfies
$$\frac{d^2 \sqrt{\sigma^{(q)}(\zeta)}}{d\zeta^2} = \bar q\, \sqrt{\sigma^{(q)}(\zeta)}, \qquad \frac{d\sqrt{\sigma^{(q)}(0)}}{d\zeta} = 0, \qquad 0 \le \zeta \le 1, \qquad (3.77)$$
with
$$\sigma^{(q)}(0) = 1. \qquad (3.78)$$
The grid computed for $\sigma^{(q)}$ turns out to be asymptotically the same as the optimal grid, calculated for $\sigma^{(o)}$. Thus, the convergence result in Theorem 1 applies after all, without changing the definition of the reconstruction (3.70).
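With $\sigma^{(q)}$ as in (3.76), a one-line computation confirms that its Schrödinger potential is the constant $\bar q$:

```latex
\sqrt{\sigma^{(q)}(\zeta)} = \tfrac{1}{2}\big(e^{\sqrt{\bar q}\,\zeta} + e^{-\sqrt{\bar q}\,\zeta}\big)
  = \cosh\!\big(\sqrt{\bar q}\,\zeta\big)
\;\Longrightarrow\;
\frac{d^2 \sqrt{\sigma^{(q)}}}{d\zeta^2} = \bar q \cosh\!\big(\sqrt{\bar q}\,\zeta\big)
  = \bar q\, \sqrt{\sigma^{(q)}},
```

so $q(\zeta) \equiv \bar q$ in (3.73), with $\sigma^{(q)}(0) = 1$ and vanishing derivative of $\sqrt{\sigma^{(q)}}$ at $\zeta = 0$.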
3.3.1
The conductivity (3.76) allows explicit computations, because the problem for the potential transforms to
$$\frac{d^2 w(\zeta)}{d\zeta^2} - (\lambda + \bar q)\, w(\zeta) = 0, \quad \zeta \in (0, 1), \qquad \frac{dw(0)}{d\zeta} = 1, \qquad w(1) = 0, \qquad (3.79)$$
by letting $w(\zeta) = v(\zeta)\sqrt{\sigma^{(q)}(\zeta)}$. Thus, the eigenfunctions $y_j^{(q)}(\zeta)$ of the differential operator associated with $\sigma^{(q)}$ are related to the reference eigenfunctions by
$$y_j^{(q)}(\zeta) = \frac{y_j^{(o)}(\zeta)}{\sqrt{\sigma^{(q)}(\zeta)}}. \qquad (3.80)$$
They are orthonormal in the weighted inner product,
$$\int_0^1 y_j^{(q)}(\zeta)\, y_p^{(q)}(\zeta)\, \sigma^{(q)}(\zeta)\, d\zeta = \int_0^1 y_j^{(o)}(\zeta)\, y_p^{(o)}(\zeta)\, d\zeta = \delta_{jp}, \qquad (3.81)$$
and since $\sigma^{(q)}(0) = 1$, the spectral weights coincide,
$$\gamma_j^{(q)} = \big[y_j^{(q)}(0)\big]^2 = \big[y_j^{(o)}(0)\big]^2 = \gamma_j^{(o)}, \qquad j = 1, 2, \ldots, \qquad (3.82)$$
while the eigenvalues are shifted by the constant potential,
$$\big(\lambda_j^{(q)}\big)^2 = \big(\lambda_j^{(o)}\big)^2 + \bar q, \qquad j = 1, 2, \ldots \qquad (3.83)$$
Let $\{\kappa_j^{(q)}, \hat\kappa_j^{(q)}\}_{j=1,\ldots,\ell}$ be the parameters obtained by solving the discrete inverse spectral problem with data $\{\lambda_j^{(q)}, \gamma_j^{(q)}, \; j = 1, \ldots, \ell\}$. The reconstruction mapping returns the $2\ell = (n-1)/2$ pointwise values
$$\sigma_j^{(q)} = \frac{\hat\kappa_j^{(q)}}{\hat\kappa_j^{(o)}}, \qquad \hat\sigma_{j+1}^{(q)} = \frac{\kappa_j^{(o)}}{\kappa_j^{(q)}}, \qquad j = 1, \ldots, \ell. \qquad (3.84)$$
We have the following result stated and proved in [14]. See the review of the proof in appendix F.

Lemma 3. The parameters $\{\kappa_j^{(q)}, \hat\kappa_j^{(q)}\}_{j=1,\ldots,\ell}$ and the ratios (3.84) are given by explicit recursions in the index $j$, in terms of the eigenvalues $\{\lambda_j^{(q)}\}_{j=1,\ldots,\ell}$ and $\{\lambda_j^{(o)}\}_{j=1,\ldots,\ell}$, starting from $\hat\kappa_1^{(q)}/\hat\kappa_1^{(o)} = 1$. $\qquad$ (3.85)

The convergence of the reconstruction $\sigma^{(q),\ell}(\zeta)$ follows from this lemma and a standard finite-difference error analysis [36] on the optimal grid satisfying Lemma 2. The reconstruction is defined as in (3.70), by the piecewise constant interpolation of the point values (3.84) on the optimal grid.
Theorem 2. As $\ell \to \infty$ we have
$$\max_{1 \le j \le \ell} \Big| \sigma_j^{(q)} - \sigma^{(q)}\big(z_j^{(o)}\big) \Big| \to 0 \qquad \text{and} \qquad \max_{1 \le j \le \ell} \Big| \hat\sigma_{j+1}^{(q)} - \sigma^{(q)}\big(\hat z_{j+1}^{(o)}\big) \Big| \to 0. \qquad (3.86)$$
Moreover, the grid $\{z_j^{(q)}, \hat z_j^{(q)}\}$ adapted to $\sigma^{(q)}$, defined by
$$\int_0^{z_{j+1}^{(q)}} \frac{d\zeta}{\sigma^{(q)}(\zeta)} = \sum_{p=1}^{j} \kappa_p^{(q)}, \qquad \int_0^{\hat z_{j+1}^{(q)}} \sigma^{(q)}(\zeta)\, d\zeta = \sum_{p=1}^{j} \hat\kappa_p^{(q)}, \qquad j = 1, \ldots, \ell, \qquad z_1^{(q)} = \hat z_1^{(q)} = 0, \qquad (3.87)$$
satisfies
$$\max_{1 \le j \le \ell+1} \big| z_j^{(q)} - z_j^{(o)} \big| \to 0, \qquad \max_{1 \le j \le \ell+1} \big| \hat z_j^{(q)} - \hat z_j^{(o)} \big| \to 0, \qquad \text{as } \ell \to \infty. \qquad (3.88)$$
3.3.2
The proof given in detail in [14] has two main steps. The first step is to establish the compactness of the
set of reconstructed conductivities. The second step uses the established compactness and the uniqueness of
solution of the continuum inverse spectral problem to get the convergence result.
Step 1: Compactness
We show here that the sequence $\{\sigma^\ell(\zeta)\}_{\ell \ge 1}$ of reconstructions (3.70) has bounded variation.
Lemma 4. The sequence $\{\sigma_j, \hat\sigma_{j+1}\}_{j=1,\ldots,\ell}$ (3.69) returned by the reconstruction mapping $Q_n$ satisfies
$$\sum_{j=1}^{\ell} \big|\log \hat\sigma_{j+1} - \log \sigma_j\big| + \sum_{j=1}^{\ell} \big|\log \hat\sigma_{j+1} - \log \sigma_{j+1}\big| \le C, \qquad (3.89)$$
where $C$ is independent of $\ell$. Therefore the sequence of reconstructions $\{\sigma^\ell(\zeta)\}_{\ell \ge 1}$ has uniformly bounded variation.
Our original formulation is not convenient for proving (3.89), because when written in Schrödinger form, it involves the second derivative of the conductivity, as seen from (3.73). Thus, we rewrite the problem in first order system form, which involves only the first derivative of $\sigma(\zeta)$, which is all we need to show (3.89).
At the discrete level, the linear system of $\ell$ equations
$$A V - \lambda V = \frac{e_1}{\hat\kappa_1} \qquad (3.90)$$
is rewritten as the first order system
$$\big(B - \sqrt{\lambda}\, I\big)\, H^{\frac12} W = \frac{e_1}{\sqrt{\hat\kappa_1}} \qquad (3.91)$$
for the vector $W = \big(W_1, \widehat W_2, \ldots, W_\ell, \widehat W_{\ell+1}\big)^T$ with components
$$W_j = \sqrt{\lambda}\, V_j, \qquad \widehat W_{j+1} = \frac{V_{j+1} - V_j}{\sqrt{\lambda}\,\kappa_j}, \qquad j = 1, \ldots, \ell. \qquad (3.92)$$
Here $H = \mathrm{diag}\big(\hat\kappa_1^{(o)}, \kappa_1^{(o)}, \ldots, \hat\kappa_\ell^{(o)}, \kappa_\ell^{(o)}\big)$ and $B$ is the tridiagonal, skew-symmetric matrix
$$B = \begin{pmatrix} 0 & \beta_1 & & \\ -\beta_1 & 0 & \ddots & \\ & \ddots & \ddots & \beta_{2\ell-1} \\ & & -\beta_{2\ell-1} & 0 \end{pmatrix} \qquad (3.93)$$
with entries
$$\beta_{2p} = \frac{1}{\sqrt{\kappa_p\, \hat\kappa_{p+1}}} = \beta_{2p}^{(o)}\, \sqrt{\frac{\hat\sigma_{p+1}}{\sigma_{p+1}}}, \qquad \beta_{2p}^{(o)} = \frac{1}{\sqrt{\kappa_p^{(o)}\, \hat\kappa_{p+1}^{(o)}}}, \qquad (3.94)$$
$$\beta_{2p-1} = \frac{1}{\sqrt{\kappa_p\, \hat\kappa_p}} = \beta_{2p-1}^{(o)}\, \sqrt{\frac{\hat\sigma_{p+1}}{\sigma_p}}, \qquad \beta_{2p-1}^{(o)} = \frac{1}{\sqrt{\kappa_p^{(o)}\, \hat\kappa_p^{(o)}}}. \qquad (3.95)$$
The logarithms of the ratios $\beta_p/\beta_p^{(o)}$ are therefore precisely the differences appearing in (3.89),
$$\sum_{j=1}^{\ell} \big|\log \hat\sigma_{j+1} - \log \sigma_j\big| + \sum_{j=1}^{\ell-1} \big|\log \hat\sigma_{j+1} - \log \sigma_{j+1}\big| = 2 \sum_{p=1}^{2\ell-1} \left|\log\frac{\beta_p}{\beta_p^{(o)}}\right|, \qquad (3.96)$$
and we can prove (3.89) by using a method of small perturbations. Recall definitions (3.71) and let
$$\delta\lambda_j^r = r\, \delta\lambda_j, \qquad \delta\gamma_j^r = r\, \delta\gamma_j, \qquad j = 1, \ldots, \ell, \qquad (3.97)$$
where $r \in [0, 1]$ is an arbitrary continuation parameter. Let also $\beta_j^r$ be the entries of the tridiagonal, skew-symmetric matrix $B^r$ computed from the spectral data $\lambda_j^r = \lambda_j^{(o)} + \delta\lambda_j^r$ and $\gamma_j^r = \gamma_j^{(o)} + \delta\gamma_j^r$, for $j = 1, \ldots, \ell$. We explain in appendix G how to obtain explicit formulae for the perturbations $d \log \beta_j^r$ in terms of the eigenvalues and eigenvectors of matrix $B^r$ and perturbations $d\,\delta\lambda_j^r = \delta\lambda_j\, dr$ and $d\,\delta\gamma_j^r = \delta\gamma_j\, dr$. These perturbations satisfy the uniform bound
$$\sum_{j=1}^{2\ell-1} \big| d \log \beta_j^r \big| \le C_1 |dr|. \qquad (3.98)$$
Integrating over the continuation parameter,
$$\log\frac{\beta_j}{\beta_j^{(o)}} = \int_0^1 d \log \beta_j^r, \qquad (3.99)$$
we obtain $\sum_{j=1}^{2\ell-1} \big|\log(\beta_j/\beta_j^{(o)})\big| \le C_1$, and (3.89) follows from (3.96).
Step 2: Convergence
Recall section 3.2, where we state that the eigenvectors $Y_j$ of $A$ are orthonormal with respect to the weighted inner product (3.30). Then, the matrix $\widetilde Y$ with columns $\mathrm{diag}\big(\hat\kappa_1^{1/2}, \ldots, \hat\kappa_\ell^{1/2}\big)\, Y_j$ is orthogonal and we have
$$\big(\widetilde Y\, \widetilde Y^T\big)_{11} = \hat\kappa_1 \sum_{j=1}^{\ell} \gamma_j = 1. \qquad (3.100)$$
Similarly,
$$\hat\kappa_1^{(o)} \sum_{j=1}^{\ell} \gamma_j^{(o)} = 2\ell\, \hat\kappa_1^{(o)} = 1, \qquad (3.101)$$
so that
$$\sigma_1 = \frac{\hat\kappa_1}{\hat\kappa_1^{(o)}} = \frac{2\ell}{\sum_{j=1}^{\ell} \gamma_j} = \Big(1 + \hat\kappa_1^{(o)} \sum_{j=1}^{\ell} \delta\gamma_j\Big)^{-1} = 1 + O\Big(\frac{1}{\ell}\Big). \qquad (3.102)$$
But $\sigma^\ell(0) = \sigma_1$, and since $\sigma^\ell(\zeta)$ has bounded variation by Lemma 4, we conclude that $\sigma^\ell(\zeta)$ is uniformly bounded in $[0, 1]$.
Now, to show that $\sigma^\ell(\zeta) \to \sigma(\zeta)$ in $L^1[0,1]$, suppose for contradiction that it does not. Then, there exists $\varepsilon_0 > 0$ and a subsequence $\{\sigma^{\ell_k}\}$ such that $\|\sigma - \sigma^{\ell_k}\|_{L^1[0,1]} \ge \varepsilon_0$ for all $k$. But since this subsequence is bounded and has bounded variation, we conclude from Helly's selection principle and the compactness of the embedding of the space of functions of bounded variation in $L^1[0,1]$ [57] that it has a subsequence converging pointwise and in $L^1[0,1]$. Call again this subsequence $\sigma^{\ell_k}$ and denote its limit by $\sigma_\star \ne \sigma$. Since the limit is in $L^1[0,1]$, we have by definitions (3.55) and Remark 3,
$$z(\zeta; \sigma^{\ell_k}) = \int_0^\zeta \frac{dt}{\sigma^{\ell_k}(t)} \to z(\zeta; \sigma_\star) = \int_0^\zeta \frac{dt}{\sigma_\star(t)}, \qquad \hat z(\zeta; \sigma^{\ell_k}) = \int_0^\zeta \sigma^{\ell_k}(t)\, dt \to \hat z(\zeta; \sigma_\star) = \int_0^\zeta \sigma_\star(t)\, dt. \qquad (3.103)$$
Since the continuum inverse spectral problem has a unique solution [35, 49, 21, 60], we must have $\sigma_\star = \sigma$. We have reached a contradiction, so $\sigma^\ell(\zeta) \to \sigma(\zeta)$ in $L^1[0,1]$. The pointwise convergence can be proved analogously.
Remark 4. All the elements of the proof presented here, except for establishing the bound (3.98), apply
to any measurement setup. The challenge in proving convergence of inversion on optimal grids for general
measurements lies entirely in obtaining sharp stability estimates of the reconstructed sequence with respect to
perturbations in the data. The inverse spectral problem is stable, and this is why we could establish the bound
(3.98). The EIT problem is exponentially unstable, and it remains an open problem to show the compactness of the function space of reconstruction sequences $\{\sigma^\ell\}$ obtained from measurements such as (3.49).
We now consider the two dimensional EIT problem, where $\sigma = \sigma(r, \theta)$ and we cannot use separation of variables as in section 3. More explicitly, we cannot reduce the inverse problem for resistor networks to one of rational approximation of the eigenvalues of the DtN map. We start by reviewing in section 4.1 the conditions of unique recovery of a network $(\Gamma, \gamma)$ from its DtN map $\Lambda_\gamma$, defined by measurements of the continuum $\Lambda_\sigma$. The approximation of the conductivity $\sigma$ from the network conductance function $\gamma$ is described in section 4.2.
4.1
The unique recoverability from $\Lambda_\gamma$ of a network $(\Gamma, \gamma)$ with known circular planar graph $\Gamma$ is established in [25, 26, 22, 23]. A graph $\Gamma = (P, E)$ is called circular and planar if it can be embedded in the plane with no edges crossing and with the boundary nodes lying on a circle. We call by association the networks with such graphs circular planar. The recoverability result states that if the data is consistent and the graph $\Gamma$ is critical then the DtN map $\Lambda_\gamma$ determines uniquely the conductance function $\gamma$. By consistent data we mean that the measured matrix belongs to the set of DtN maps of circular planar resistor networks.

A graph is critical if and only if it is well-connected and the removal of any edge breaks the well-connectedness. A graph is well-connected if all its circular pairs $(P, Q)$ are connected. Let $P$ and $Q$ be two sets of boundary nodes with the same cardinality $|P| = |Q|$. We say that $(P, Q)$ is a circular pair when the nodes in $P$ and $Q$ lie on disjoint segments of the boundary $B$. The pair is connected if there are $|P|$ disjoint paths joining the nodes of $P$ to the nodes of $Q$.
A symmetric $n \times n$ real matrix $\Lambda$ is the DtN map of a circular planar resistor network with $n$ boundary nodes if its rows sum to zero, $\Lambda \mathbf{1} = 0$ (conservation of currents), and all its circular minors $(\Lambda)_{P,Q}$ have non-positive determinant. A circular minor $(\Lambda)_{P,Q}$ is a square submatrix of $\Lambda$ defined for a circular pair $(P, Q)$, with row and column indices corresponding to the nodes in $P$ and $Q$, ordered according to a predetermined orientation of the circle $B$. Since subsets of $P$ and $Q$ with the same cardinality also form circular pairs, the determinantal inequalities are equivalent to requiring that all circular minors be totally non-positive. A matrix is totally non-positive if all its minors have non-positive determinant.
Examples of critical networks were given in section 2.2, with graphs determined by tensor product grids. Criticality of such networks is proved in [22] for an odd number $n$ of boundary points. As explained in section 2.2 (see in particular equation (2.14)), criticality holds when the number of edges in $E$ equals the number $n(n-1)/2$ of independent entries of the DtN map $\Lambda$.

The discussion in this section is limited to the tensor product topology, which is natural for the full boundary measurement setup. Two other topologies admitting critical networks (pyramidal and two-sided) are discussed in more detail in sections 5.2.1 and 5.2.2. They are better suited for partial boundary measurement setups [16, 17].
Remark 5. It is impossible to recover both the topology and the conductances from the DtN map of a network. An example of this indetermination is the so-called Y-Delta transformation shown in Figure 6. A critical network can be transformed into another by a sequence of Y-Delta transformations without affecting the DtN map [23].
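The indetermination of Remark 5 can be checked numerically: the DtN map of a resistor network is the Schur complement of its Kirchhoff matrix with respect to the interior nodes, and for a Y network it coincides with the DtN map of a Delta network with the classical equivalent conductances. The leg conductances below are arbitrary sample values:

```python
import numpy as np

# Y network: boundary nodes p, q, r = 0, 1, 2 and interior (center) node 3,
# with sample leg conductances g.
g = np.array([1.0, 2.0, 3.0])
K = np.zeros((4, 4))                       # Kirchhoff matrix of the Y network
for i, gi in enumerate(g):
    K[i, i] += gi
    K[3, 3] += gi
    K[i, 3] = K[3, i] = -gi
# DtN map = Schur complement eliminating the interior node.
L_Y = K[:3, :3] - np.outer(K[:3, 3], K[3, :3]) / K[3, 3]

# Delta network on the same boundary nodes, with the Y-Delta conductances
# gamma_ab = g_a * g_b / (g_p + g_q + g_r).
s = g.sum()
L_D = np.zeros((3, 3))
for a in range(3):
    for b in range(3):
        if a != b:
            gam = g[a] * g[b] / s
            L_D[a, b] = -gam
            L_D[a, a] += gam
```

The two DtN maps agree, so the networks are indistinguishable from boundary measurements; the rows also sum to zero, as required for consistency.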
4.1.1
Ingerman and Morrow [42] showed that pointwise values of the kernel of $\Lambda_\sigma$ at any $n$ distinct nodes on $B$ define a matrix that is consistent with the DtN map of a circular planar resistor network, as defined above. We consider a generalization of these measurements, taken with electrode functions $\chi_q(\theta)$, as given in equation (3.46). It is shown in [13] that the measurement operator $M_n$ in (3.46) gives a matrix $M_n(\Lambda_\sigma)$ that belongs to the set of DtN maps of circular planar resistor networks. We can therefore equate
$$M_n(\Lambda_\sigma) = \Lambda_\gamma, \qquad (4.1)$$
Figure 6: Given some conductances in the Y network, there is a choice of conductances in the Delta network for which the two networks are indistinguishable from electrical measurements at the nodes p, q and r.

and solve the inverse problem for the network $(\Gamma, \gamma)$ to determine the conductance $\gamma$ from the data $\Lambda_\gamma$.
4.2
To approximate $\sigma(x)$ from the network conductance $\gamma$ we modify the reconstruction mapping introduced in section 3.2 for layered media. The approximation is obtained by interpolating the output of the reconstruction mapping on the optimal grid computed for the reference $\sigma^{(o)} \equiv 1$. This grid is described in sections 2.2 and 3.2.2. But which interpolation should we take? If we could have grids with as many points as we wish, the choice of the interpolation would not matter. This was the case in section 3.3, where we studied the continuum limit $n \to \infty$ for the inverse spectral problem. The EIT problem is exponentially unstable, and the whole idea of our approach is to have a sparse parametrization of the unknown $\sigma$. Thus, $n$ is typically small, and the approximation of $\sigma$ should go beyond ad-hoc interpolations of the parameters returned by the reconstruction mapping. We show in section 4.2.3 how to approximate $\sigma$ with a Gauss-Newton iteration preconditioned with the reconstruction mapping. We also explain briefly how one can introduce prior information about $\sigma$ in the inversion method.
4.2.1
The idea behind the reconstruction mapping is to interpret the resistor network $(\Gamma, \gamma)$ determined from the measured $\Lambda_\gamma = M_n(\Lambda_\sigma)$ as a finite volumes discretization of equation (1.1) on the optimal grid computed for $\sigma^{(o)} \equiv 1$. This is what we did in section 3.2 for the layered case, and the approach extends to the two dimensional problem.

The conductivity $\sigma$ is related to the conductances $\gamma(E)$, for $E \in \mathbf{E}$, via quadrature rules that approximate the current fluxes (2.5) through the dual edges. We could use for example the quadrature in [15, 16, 52], where the conductances are
$$\gamma_{a,b} = \sigma(P_{a,b})\, \frac{L(\Sigma_{a,b})}{L(E_{a,b})}, \qquad (4.2)$$
for $(a, b) \in \big\{\big(i, j - \tfrac12\big), \big(i - \tfrac12, j\big)\big\}$, where $L$ denotes the arc length of the primary and dual edges $E$ and $\Sigma$ (see section 2.1 for the indexing and edge notation). Another example of quadrature is given in [13]. It is specialized to tensor product grids in a disk, and it coincides with the quadrature (3.8) in the case of layered media. For inversion purposes, the difference introduced by different quadrature rules is small (see [15, Section 2.4]).
To define the reconstruction mapping $Q_n$, we solve two inverse problems for resistor networks. One with the measured data $\Lambda_\gamma = M_n(\Lambda_\sigma)$, to determine the conductance $\gamma$, and one with the computed data $\Lambda_{\gamma^{(o)}} = M_n(\Lambda_{\sigma^{(o)}})$, for the reference $\sigma^{(o)} \equiv 1$. The latter gives the reference conductance $\gamma^{(o)}$, which we associate with the geometrical factor in (4.2),
$$\gamma^{(o)}_{a,b} \approx \frac{L(\Sigma_{a,b})}{L(E_{a,b})}, \qquad (4.3)$$
so that the conductivity can be estimated pointwise by the ratio
$$\sigma(P_{a,b}) \approx \frac{\gamma_{a,b}}{\gamma^{(o)}_{a,b}}. \qquad (4.4)$$
Note that (4.4) becomes (3.40) in the layered case, where the conductances given by the quadrature (3.8) are proportional to the angular step $h_\theta$, and the factors $h_\theta$ cancel out in the ratio.

Let us call $\mathcal{D}_n$ the set in $\mathbb{R}^e$ of $e = n(n-1)/2$ independent measurements in $M_n(\Lambda_\sigma)$, obtained by removing the redundant entries. Note that there are $e$ edges in the network, as many as the number of the data points in $\mathcal{D}_n$, given for example by the entries in the strictly upper triangular part of $M_n(\Lambda_\sigma)$, stacked column by column in a vector in $\mathbb{R}^e$. By the consistency of the measurements (section 4.1.1), $\mathcal{D}_n$ coincides with the set of the strictly upper triangular parts of the DtN maps of circular planar resistor networks with $n$ boundary nodes. The mapping $Q_n : \mathcal{D}_n \to \mathbb{R}^e_+$ associates to the measurements in $\mathcal{D}_n$ the $e$ positive values $\sigma(P_{a,b})$ in (4.4).
We illustrate in Figure 7(b) the output of the mapping Qn , linearly interpolated on the optimal grid.
It gives a good approximation of the conductivity that is improved further in Figure 7(c) with the Gauss-
Newton iteration described below. The results in Figure 7 are obtained by solving the inverse problem for
the networks with a fast layer peeling algorithm [22]. Optimization can also be used for this purpose, at
some additional computational cost. In any case, because we have relatively few parameters, $n(n-1)/2$ of them, the cost is negligible compared to that of solving the forward problem on a fine grid.
4.2.2
The definition of the tensor product optimal grids considered in sections 2.2 and 3 does not extend to partial boundary measurement setups or to non-layered reference conductivity functions. We present here an alternative approach to determining the location of the points $P_{a,b}$ at which we approximate the conductivity in the output (4.4) of the reconstruction mapping. This approach extends to arbitrary setups, and it is based on the sensitivity analysis of the conductance function to changes in the conductivity [16].

The sensitivity grid points are defined as the maxima of the sensitivity functions $D\gamma_{a,b}(x)$. They are the points at which the conductances $\gamma_{a,b}$ are most sensitive to changes in the conductivity. The sensitivity functions $D\gamma(x)$ are obtained by differentiating the identity $\Lambda_\gamma = M_n(\Lambda_\sigma)$ with respect to $\sigma$,
$$(D\gamma)(x) = \big(D_\gamma \Lambda\big)^{-1}\Big|_{\Lambda_\gamma = M_n(\Lambda_\sigma)}\; \mathrm{vec}\Big(M_n\big(D K_\sigma(x)\big)\Big), \qquad x \in \Omega. \qquad (4.5)$$
The left hand side is a vector in $\mathbb{R}^e$. Its $k$th entry is the Fréchet derivative of the conductance $\gamma_k$ with respect
Figure 7: (a) True conductivity phantoms. (b) The output of the reconstruction mapping Qn , linearly
interpolated on a grid obtained for layered media as in section 3.2.2. (c) One step of Gauss-Newton improves
the reconstructions.
to changes in the conductivity $\sigma$. The entries of the Jacobian $D_\gamma\Lambda \in \mathbb{R}^{e \times e}$ are
$$\big(D_\gamma\Lambda\big)_{jk} = \left[\mathrm{vec}\left(\frac{\partial \Lambda_\gamma}{\partial \gamma_k}\right)\right]_j, \qquad (4.6)$$
where $\mathrm{vec}(A)$ denotes the operation of stacking in a vector in $\mathbb{R}^e$ the entries in the strict upper triangular part of a matrix $A \in \mathbb{R}^{n \times n}$. The last factor in (4.5) is the sensitivity of the measurements to changes of the conductivity, given by
$$\Big[M_n\big(D K_\sigma(x)\big)\Big]_{ij} = \begin{cases} \displaystyle\int_B \!\!\int_B \chi_i(x')\, \chi_j(y')\, D K_\sigma(x', y'; x)\, ds(x')\, ds(y'), & i \ne j, \\[1ex] -\displaystyle\sum_{k \ne i} \int_B \!\!\int_B \chi_i(x')\, \chi_k(y')\, D K_\sigma(x', y'; x)\, ds(x')\, ds(y'), & i = j. \end{cases} \qquad (4.7)$$
Here $K_\sigma(x', y')$ is the kernel of the DtN map $\Lambda_\sigma$ evaluated at points $x'$ and $y'$ on $B$. Its Jacobian with respect to changes in the conductivity is
$$D K_\sigma(x', y'; x) = -\nabla_x\big(\partial_{n(x')} G(x, x')\big) \cdot \nabla_x\big(\partial_{n(y')} G(x, y')\big), \qquad x \in \Omega, \quad x', y' \in B, \qquad (4.8)$$
Figure 8: Sensitivity functions $\mathrm{diag}(1/\gamma^{(0)})\, D\gamma$ computed around the conductivity $\sigma \equiv 1$ for $n = 13$. The images have a linear scale from dark blue to dark red spanning their maximum in absolute value. Light green corresponds to zero. We only display 6 sensitivity functions; the other ones can be obtained by rotations by integer multiples of $2\pi/13$. The primary grid is displayed in solid lines and the dual grid in dotted lines. The maxima of the sensitivity functions are very close to those of the optimal grid (intersections of solid and dotted lines).
where $G$ is the Green's function of the differential operator $u \mapsto \nabla\cdot(\sigma\nabla u)$ with Dirichlet boundary conditions, and $n(x)$ is the outer unit normal at $x \in B$. For more details on the calculation of the sensitivity functions we refer to [16]. The sensitivity grid points are
$$P_{a,b} = \arg\max_{x \in \Omega} \big| D\gamma_{a,b}(x) \big|, \qquad (4.9)$$
evaluated at $\sigma = \sigma^{(o)} \equiv 1$. We display in Figure 8 the sensitivity functions with the superposed optimal grid obtained as in section 3.2.2. Note that the maxima of the sensitivity functions are very close to the optimal grid points in the full measurements case.
4.2.3
Since the reconstruction mapping $Q_n$ gives good reconstructions when properly interpolated, we can think of it as an approximate inverse of the forward map $\sigma \mapsto M_n(\Lambda_\sigma)$ and use it as a non-linear preconditioner. Instead of minimizing the usual output least squares data misfit, we solve the preconditioned least squares problem
$$\min_{\sigma_{\mathrm{rec}} > 0}\; \Big\| Q_n\big(\mathrm{vec}\big(M_n(\Lambda_{\sigma_{\mathrm{rec}}})\big)\big) - Q_n\big(\mathrm{vec}\big(M_n(\Lambda_{\sigma})\big)\big) \Big\|_2^2. \qquad (4.10)$$
Here $\sigma$ is the conductivity that we would like to recover. For simplicity the minimization (4.10) is formulated with noiseless data and no regularization. We refer to [17] for a study of the effect of noise and regularization on the minimization (4.10).

The positivity constraints in (4.10) can be dealt with by solving for the log-conductivity $\kappa = \ln(\sigma_{\mathrm{rec}})$ instead of the conductivity $\sigma_{\mathrm{rec}}$. With this change of variable, the residual in (4.10) can be minimized with the standard Gauss-Newton iteration, which we write in terms of the sensitivity functions (4.5) evaluated at $\sigma^{(j)} = \exp(\kappa^{(j)})$:
$$\kappa^{(j+1)} = \kappa^{(j)} - \Big[\mathrm{diag}\big(1/\gamma^{(0)}\big)\, D\gamma\, \mathrm{diag}\big(\exp(\kappa^{(j)})\big)\Big]^{\dagger} \Big[ Q_n\big(\mathrm{vec}\big(M_n(\Lambda_{\exp(\kappa^{(j)})})\big)\big) - Q_n\big(\mathrm{vec}\big(M_n(\Lambda_\sigma)\big)\big) \Big]. \qquad (4.11)$$
The superscript $\dagger$ denotes the Moore-Penrose pseudoinverse and the division is understood componentwise. We take as initial guess the log-conductivity $\kappa^{(0)} = \ln \sigma^{(0)}$, where $\sigma^{(0)}$ is given by the linear interpolation of $Q_n\big(\mathrm{vec}\big(M_n(\Lambda_\sigma)\big)\big)$ on the optimal grid (i.e., the reconstruction from section 4.2.1). Having such a good initial guess helps with the convergence of the Gauss-Newton iteration. Our numerical experiments indicate that the residual in (4.10) is mostly reduced in the first iteration [13]. Subsequent iterations do not change the reconstructions significantly and result in negligible reductions of the residual in (4.10). Thus, for all practical purposes, the preconditioned problem is linear. We have also observed in [13, 17] that the condition number of the linearized problem is significantly reduced by the preconditioner $Q_n$.
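The structure of the preconditioned iteration (4.11) can be illustrated on a toy problem. Here `forward` and `qn` are hypothetical stand-ins for $\mathrm{vec}\big(M_n(\Lambda_{\exp\kappa})\big)$ and the reconstruction mapping $Q_n$; only the algorithmic pattern, Gauss-Newton in the log variable with a pseudoinverse, matches the text:

```python
import numpy as np

# Toy stand-ins for the EIT forward map and the preconditioner Q_n.
def forward(kappa):
    s = np.exp(kappa)                      # positivity via log-conductivity
    return np.array([s[0] + s[1], s[1] + s[2], s[0] * s[2]])

def qn(d):                                 # a smooth approximate inverse
    return np.log(d)

def residual(kappa, data):
    return qn(forward(kappa)) - qn(data)

def jacobian(f, x, h=1e-6):                # central finite-difference Jacobian
    J = np.zeros((f(x).size, x.size))
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = h
        J[:, k] = (f(x + e) - f(x - e)) / (2 * h)
    return J

kappa_true = np.array([0.2, -0.1, 0.3])
data = forward(kappa_true)
kappa = np.zeros(3)                        # initial guess
for _ in range(15):                        # Gauss-Newton with pseudoinverse
    r = residual(kappa, data)
    J = jacobian(lambda k: residual(k, data), kappa)
    kappa = kappa - np.linalg.pinv(J) @ r
```

In this toy example the iteration recovers `kappa_true`; the log change of variable keeps the recovered conductivities positive throughout.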
Remark 6. The conductivity obtained after one step of the Gauss-Newton iteration is in the span of the
sensitivity functions (4.5). The use of the sensitivity functions as an optimal parametrization of the unknown
conductivity was studied in [17]. Moreover, the same preconditioned Gauss-Newton idea was used in [37] for
the inverse spectral problem of section 3.2.
We illustrate the improvement of the reconstructions with one Gauss-Newton step in Figure 7(c). If prior information about the conductivity is available, it can be added in the form of a regularization term in (4.10). An example using total variation regularization is given in [13].
In this section we consider the two dimensional EIT problem with partial boundary measurements. As mentioned in section 1, the boundary $B$ is the union of the accessible subset $B_A$ and the inaccessible subset $B_I$. The accessible boundary $B_A$ may consist of one or multiple connected components. We assume that the inaccessible boundary is grounded, so the partial boundary measurements are a set of Cauchy data $\big(u|_{B_A},\, n\cdot\sigma\nabla u|_{B_A}\big)$, where $u$ satisfies (1.1) and $u|_{B_I} = 0$. The inverse problem is to determine $\sigma$ from these Cauchy data.

In the full boundary measurement setup, the rotational invariance of the problem allows us to compute the optimal grids from the discrete inverse problem for $\sigma^{(o)} \equiv 1$. This invariance does not hold for the partial boundary measurements, so new ideas are needed to define the optimal grids. We present two approaches in sections 5.1 and 5.2. The first one uses circular planar networks with the same topology as before, and mappings that take uniformly distributed points on $B$ to points on the accessible boundary $B_A$. The second one uses networks with topologies designed specifically for the partial boundary measurement setups. The underlying two dimensional optimal grids are defined with sensitivity functions.
5.1
The idea of the approach described in this section is to map the partial data problem to one with full measurements at equidistant points, where we know from section 4 how to define the optimal grids. Since $\Omega$ is a unit disk, we can do this with diffeomorphisms of the unit disk to itself.

Let us denote such a diffeomorphism by $F$ and its inverse $F^{-1}$ by $G$. If the potential $u$ satisfies (1.1), then the transformed potential $\tilde u(x) = u(F(x))$ solves the same equation with conductivity $\tilde\sigma$ defined by the push forward
$$\tilde\sigma(x) = \big(G_*\sigma\big)(x) = \left.\frac{G'(y)\, \sigma(y)\, \big(G'(y)\big)^T}{\big|\det G'(y)\big|}\right|_{y = F(x)}. \qquad (5.1)$$
The push forward $\Lambda_\sigma^g$ of the DtN map is written in terms of the restrictions of the diffeomorphisms $G$ and $F$ to the boundary, $g = G|_B$ and $f = F|_B$,
$$\big(\Lambda_\sigma^g\, u_B\big)(\theta) = \big|f'(\theta)\big|\, \Big(\Lambda_\sigma\big(u_B \circ g\big)\Big)\big(f(\theta)\big), \qquad \theta \in [0, 2\pi), \qquad (5.2)$$
for $u_B \in H^{1/2}(B)$. It is shown in [64] that the DtN map is invariant under the push forward in the following sense,
$$\Lambda_\sigma^g = \Lambda_{G_*\sigma}. \qquad (5.3)$$
Therefore, given (5.3) we can compute the push forward of the DtN map, solve the inverse problem with data $\Lambda_\sigma^g$ to obtain $G_*\sigma$, and then map it back using the inverse of (5.2). This requires the full knowledge of the DtN map. However, if we use the discrete analogue of the above procedure, we can transform the discrete measurements of $\Lambda_\sigma$ on $B_A$ to discrete measurements at equidistant points on $B$, from which we can estimate $\tilde\sigma$ as described in section 4.
There is a major obstacle to this procedure: the EIT problem is uniquely solvable only for isotropic conductivities. Anisotropic conductivities are determined by the DtN map only up to a boundary-preserving diffeomorphism [64]. Two distinct approaches to overcome this obstacle are described in sections 5.1.1 and 5.1.2. The first one uses conformal mappings $F$ and $G$, which preserve the isotropy of the conductivity, at the expense of rigid placement of the measurement points. The second approach uses extremal quasiconformal mappings that minimize the artificial anisotropy of $\tilde\sigma$ introduced by placing the measurement points in $B_A$ at our will.
Figure 9: The optimal grid in the unit disk (left) and its image under the conformal mapping $F$ (right). Primary grid lines are solid black, dual grid lines are dotted black. Primary and dual boundary grid nodes are marked with different symbols. The accessible boundary segment $B_A$ is shown in solid red.
5.1.1
Conformal mappings
The push forward $G_*\sigma$ of an isotropic $\sigma$ is isotropic if $G$ and $F$ satisfy $G'(G')^T = |\det G'|\, I$ and $F'(F')^T = |\det F'|\, I$. This means that the diffeomorphism is conformal, and the push forward is simply
$$G_*\sigma = \sigma \circ F. \qquad (5.4)$$
Since all conformal mappings of the unit disk to itself belong to the family of Möbius transforms [48], $F$ must be of the form
$$F(z) = e^{i\phi}\, \frac{z - a}{1 - \bar a z}, \qquad |a| < 1, \qquad (5.5)$$
where we associate $\mathbb{R}^2$ with the complex plane $\mathbb{C}$. Note that the group of transformations (5.5) is extremely rigid, its only degrees of freedom being the parameters $a$ and $\phi$.

To use the full data discrete inversion procedure from section 4 we require that $G$ maps the accessible boundary segment $B_A = \{e^{i\theta} \mid \theta \in [-\theta_A, \theta_A]\}$ to the whole boundary with the exception of one segment between the equidistant measurement points $\theta_k$, $k = (n+1)/2, (n+3)/2$, as shown in Figure 9. This determines completely the values of the parameters $a$ and $\phi$ in (5.5), which in turn determine the mapping $f$ on the boundary. Thus, we have no further control over the positioning of the measurement points $\theta_k = f(\tau_k)$, $k = 1, \ldots, n$, the images of the equidistant points $\tau_k$.
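The rigidity of (5.5) is easy to see numerically: once $a$ and $\phi$ are fixed (sample values below), the boundary map is fixed, and equidistant points are sent to non-equidistant ones:

```python
import numpy as np

# A Mobius transform of the unit disk, as in (5.5); a and phi are sample values.
a, phi = 0.4 + 0.2j, 0.7

def F(z):
    return np.exp(1j * phi) * (z - a) / (1 - np.conj(a) * z)

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
w = F(np.exp(1j * theta))          # images of equidistant boundary points

gaps = np.diff(np.sort(np.angle(w)))  # angular gaps between the images
```

The images stay on the unit circle, but their angular spacing is strongly distorted, which is the source of the non-uniform grids in Figure 9.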
As shown in Figure 9, the lack of control over $\theta_k$ leads to a grid that is highly non-uniform in angle. In fact, it is demonstrated in [15] that as $n$ increases there is no asymptotic refinement of the grid away from the center of $B_A$, where the points accumulate. However, since the limit $n \to \infty$ is unattainable in practice due to the severe ill-conditioning of the problem, the grids obtained by conformal mapping can still be useful in practical inversion. We show reconstructions with these grids in section 5.3.
5.1.2
To overcome the issues with conformal mappings that arise due to the inherent rigidity of the group of conformal automorphisms of the unit disk, we use here quasiconformal mappings. A quasiconformal mapping $F$ obeys a Beltrami equation in $\Omega$,
$$\frac{\partial F}{\partial \bar z} = \mu(z)\, \frac{\partial F}{\partial z}, \qquad \|\mu\|_\infty < 1, \qquad (5.6)$$
with a Beltrami coefficient $\mu(z)$ that measures how much $F$ differs from a conformal mapping. If $\mu \equiv 0$, then (5.6) reduces to the Cauchy-Riemann equation and $F$ is conformal. The magnitude of $\mu$ also provides a measure of the anisotropy of the push forward of $\sigma$ by $F$. The definition of the anisotropy is
$$\kappa(F_*\sigma, z) = \frac{\sqrt{\lambda_1(z)/\lambda_2(z)} - 1}{\sqrt{\lambda_1(z)/\lambda_2(z)} + 1}, \qquad (5.7)$$
where $\lambda_1(z)$, $\lambda_2(z)$ are the largest and the smallest eigenvalues of the push forward $F_*\sigma$, respectively. The connection between $\kappa$ and $\mu$ is given by
$$\kappa(F_*\sigma, z) = |\mu(z)|, \qquad (5.8)$$
$$\kappa(F_*\sigma) = \sup_{z \in \Omega} \kappa(F_*\sigma, z) = \|\mu\|_\infty. \qquad (5.9)$$
Since the unknown conductivity is isotropic, we would like to minimize the amount of artificial anisotropy that we introduce into the reconstruction by using $F$. This can be done with extremal quasiconformal mappings, which minimize $\|\mu\|_\infty$ under constraints that fix $f = F|_B$, thus allowing us to control the positioning of the measurement points.

For sufficiently regular boundary values $f$ there exists a unique extremal quasiconformal mapping, which is known to be of Teichmüller type [63]. Its Beltrami coefficient satisfies
$$\mu(z) = \|\mu\|_\infty\, \frac{\overline{\phi(z)}}{|\phi(z)|}, \qquad (5.10)$$
for some holomorphic function $\phi(z)$ in $\Omega$. Similarly, we can define the Beltrami coefficient of $G$, using a holomorphic function $\psi$. It is established in [62] that $F$ admits a decomposition
$$F = \Psi^{-1} \circ A_K \circ \Phi, \qquad (5.11)$$
where
$$\Phi(z) = \int^z \sqrt{\phi(s)}\, ds, \qquad \Psi(\zeta) = \int^\zeta \sqrt{\psi(s)}\, ds, \qquad (5.12)$$
and $A_K$ is the affine stretching
$$A_K(x + iy) = Kx + iy, \qquad (5.13)$$
with the constant $K$ related to the maximal anisotropy by
$$K = \frac{1 - \|\mu\|_\infty}{1 + \|\mu\|_\infty}. \qquad (5.14)$$
Since only the behavior of $f$ at the measurement points $\theta_k$ is of interest to us, it is possible to construct explicitly the mappings $\Phi$ and $\Psi$ [15]. They are Schwarz-Christoffel conformal mappings of the unit disk to polygons of special form, as shown in Figure 10. See [15, Section 3.4] for more details.

We demonstrate the behavior of the optimal grids under the extremal quasiconformal mappings in Figure 11. We present the results for two different values of the affine stretching constant $K$. As we increase the amount of anisotropy from $K = 0.8$ to $K = 0.66$, the distribution of the grid nodes becomes more uniform. The price to pay for this more uniform grid is an increased amount of artificial anisotropy, which may degrade the quality of the reconstruction, as shown in the numerical examples in section 5.3.
Figure 12: Pyramidal networks $\Gamma_n$ for $n = 6, 7$. The boundary nodes $v_j$, $j = 1, \ldots, n$, and the interior nodes are indicated with different markers.
5.2
The limitations of the construction of the optimal grids with coordinate transformations can be attributed to the fact that there is no non-singular mapping between the full boundary $B$ and its proper subset $B_A$. Here we describe an alternative approach that avoids these limitations by considering networks with different topologies, constructed specifically for the partial measurement setups. The one-sided case, with the accessible boundary $B_A$ consisting of one connected segment, is in section 5.2.1. The two-sided case, with $B_A$ the union of two disjoint segments, is in section 5.2.2. The optimal grids are constructed using the sensitivity analysis of the discrete and continuum problems, as explained in sections 4.2.2 and 5.2.3.
5.2.1 Pyramidal networks
We consider here the case of BA consisting of one connected segment of the boundary. The goal is to choose
a topology of the resistor network based on the flow properties of the continuum partial data problem.
Explicitly, we observe that since the potential excitation is supported on BA , the resulting currents should
not penetrate deep into , away from BA . The currents are so small sufficiently far away from BA that in the
discrete (network) setting we can ask that there is no flow escaping the associated nodes. Therefore, these
nodes are interior ones. A suitable choice of networks that satisfy such conditions was proposed in [16]. We
call them pyramidal and denote their graphs by n , with n the number of boundary nodes.
We illustrate two pyramidal graphs in Figure 12, for n = 6 and 7. Note that it is not necessary that n
be odd for the pyramidal graphs Γ_n to be critical, as was the case in the previous sections. In what follows
we refer to the edges of Γ_n as vertical or horizontal according to their orientation in Figure 12. Unlike
the circular networks, in which all the boundary nodes are in a sense adjacent, there is a gap between the
boundary nodes v_1 and v_n of a pyramidal network. This gap is formed by the bottommost n − 2 interior
nodes that enforce the condition of zero normal flux, the approximation of the lack of penetration of currents
away from B_A.
It is known from [23, 16] that the pyramidal networks are critical and thus uniquely recoverable from the
DtN map. Similar to the circular network case, pyramidal networks can be recovered using a layer peeling
algorithm in a finite number of algebraic operations. We recall such an algorithm below, from [16], in the
case of even n = 2m. A similar procedure can also be used for odd n.
Algorithm 1. To determine the conductance γ of the pyramidal network (Γ_n, γ) from the DtN map Λ^{(n)},
perform the following steps:

(1) To compute the conductances of the horizontal and vertical edges emanating from the boundary node v_p,
for each p = 1, . . . , 2m, define the following sets:

    Z = {v_1, . . . , v_{p−1}, v_{p+1}, . . . , v_m},   C = {v_{m+2}, . . . , v_{2m}},

and let H and V be the corresponding sets of boundary nodes used in [16] to localize the horizontal and
vertical flows, with 1_H and 1_V the vectors of all ones indexed by H and V.

(2) Compute the conductance γ(E_{p,h}) of the horizontal edge emanating from v_p using

    γ(E_{p,h}) = [Λ^{(n)}_{p,H} − Λ^{(n)}_{p,C} (Λ^{(n)}_{Z,C})^{−1} Λ^{(n)}_{Z,H}] 1_H,   (5.15)

and compute the conductance γ(E_{p,v}) of the vertical edge emanating from v_p using

    γ(E_{p,v}) = [Λ^{(n)}_{p,V} − Λ^{(n)}_{p,C} (Λ^{(n)}_{Z,C})^{−1} Λ^{(n)}_{Z,V}] 1_V.   (5.16)

(3) Peel the outermost layer of the network and update the DtN map,

    Λ^{(n−2)} = P (Λ^{(n)} − K_{BB}) P^T − (P K_{BS}) (K^0_{SS})^{−1} (P K_{BS})^T,   (5.17)

then repeat the procedure for the smaller pyramidal network with n − 2 boundary nodes.
Here P ∈ R^{(n−2)×n} is a projection operator: PP^T = I_{n−2}, and K^0_{SS} is the part of K_{SS} that only includes
the contributions from the edges connecting S to B.
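Recovery formulas of this type are Schur complements of the DtN map that screen off all boundary nodes except the chosen sets. A small self-contained check of the boundary spike formula γ = Λ_{p,p} − Λ_{p,C}(Λ_{Z,C})^{−1}Λ_{Z,p} on a three-spike star network (the network and the singleton sets Z, C are illustrative choices, not from [16]):

```python
import numpy as np

# Star network: boundary nodes 0,1,2 joined to one interior node by
# conductances g[0], g[1], g[2]. Its DtN map is the Schur complement
# of the interior block of the Kirchhoff matrix.
g = np.array([2.0, 3.0, 5.0])
K_BB = np.diag(g)
K_BI = -g.reshape(3, 1)          # boundary-to-interior couplings
K_II = np.array([[g.sum()]])     # interior node balance
Lam = K_BB - K_BI @ np.linalg.inv(K_II) @ K_BI.T

# Boundary spike formula: recover the conductance of the edge at node p,
# with index sets Z and C chosen among the remaining boundary nodes.
def spike(Lam, p, Z, C):
    LZC = Lam[np.ix_(Z, C)]
    return Lam[p, p] - Lam[p, C] @ np.linalg.solve(LZC, Lam[Z, p])

for p, Z, C in [(0, [1], [2]), (1, [2], [0]), (2, [0], [1])]:
    print(p, spike(Lam, p, Z, C), g[p])   # recovered vs true conductance
```

The Schur complement cancels the coupling through the interior node exactly, so the true conductances are recovered to machine precision.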
Figure 13: Two-sided network T_n for n = 10. The boundary nodes v_j, j = 1, . . . , n, are distinguished from
the interior nodes by different markers.

5.2.2 Two-sided networks
We call the problem two-sided when the accessible boundary B_A consists of two disjoint segments of B. A
suitable network topology for this setting was introduced in [17]. We call these networks two-sided and
denote their graphs by T_n, with n the number of boundary nodes, assumed even, n = 2m. Half of the nodes
are on one segment of the boundary and the other half on the other, as illustrated in Figure 13. Similar to
the one-sided case, the two groups of m boundary nodes are separated by the outermost interior nodes, which
model the lack of penetration of currents away from the accessible boundary segments. One can verify that
the two-sided network is critical, and thus it can be uniquely recovered from the DtN map by Algorithm
2, introduced in [17].

When referring to either the horizontal or vertical edges of a two-sided network, we use their orientation
in Figure 13.
Algorithm 2. To determine the conductance γ of the two-sided network (T_n, γ) from the DtN map Λ,
perform the following steps:

(1) Peel the lower layer of horizontal resistors:
For p = m + 2, m + 3, . . . , 2m define the sets Z = {p + 1, p + 2, . . . , p + m − 1} and C = {p − 2, p − 3, . . . , p − m}.
The conductance of the horizontal edge between v_p and v_{p−1} is

    γ(E_{p,p−1,h}) = −Λ_{p,p−1} + Λ_{p,C} (Λ_{Z,C})^{−1} Λ_{Z,p−1}.   (5.18)

Assemble a symmetric tridiagonal matrix A with off-diagonal entries −γ(E_{p,p−1,h}) and rows summing
to zero. Update the lower right m-by-m block of the DtN map by subtracting A from it.
(2) Let s = m − 1.

(3) Peel the top and bottom layers of vertical resistors:
For p = 1, 2, . . . , 2m define the sets L = {p − 1, p − 2, . . . , p − s} and R = {p + 1, p + 2, . . . , p + s}. If
p < m/2 for the top layer, or p > 3m/2 for the bottom layer, set Z = L, C = R. Otherwise let Z = R,
C = L. The conductance of the vertical edge emanating from v_p is given by

    γ(E_{p,v}) = Λ_{p,p} − Λ_{p,C} (Λ_{Z,C})^{−1} Λ_{Z,p}.   (5.19)

(4) Update the DtN map to remove the peeled vertical resistors,

    Λ ← D (D − Λ)^{−1} D − D,   (5.20)

where D is the diagonal matrix of the conductances determined in (5.19).
(5) Peel a layer of horizontal resistors:

    γ(E_{p,p+1,h}) = [Λ_{p,H} − Λ_{p,C} (Λ_{Z,C})^{−1} Λ_{Z,H}] 1,   (5.21)

with index sets chosen as in [17]. Decrease s by one and repeat from step (3) until all the conductances
are determined.
Similar to Algorithm 1, Algorithm 2 is based on the construction of special solutions examined in [22, 24].
These solutions are designed to localize the flow on the outermost edges, whose conductances we determine
first. In particular, formulas (5.18) and (5.19) are known as the boundary edge and boundary spike
formulas [24, Corollaries 3.15 and 3.16].
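The structure of an update like (5.20) can be checked on a toy example: if every boundary node connects through a spike of conductance D_{pp} to an inner network with DtN map Λ̃, then eliminating the inner potentials gives Λ = D − D(Λ̃ + D)^{−1}D, and solving for Λ̃ recovers the peeled map. A sketch under this assumption (the inner network below is arbitrary):

```python
import numpy as np

# Inner network: a single resistor of conductance c between two nodes.
c = 4.0
Lam_inner = np.array([[c, -c], [-c, c]])

# Attach a spike of conductance d[p] at each node; the measured DtN map
# of the composite network follows by eliminating the inner potentials.
d = np.array([1.5, 2.5])
D = np.diag(d)
Lam = D - D @ np.linalg.inv(Lam_inner + D) @ D

# Peeling: once the spike conductances are known, recover the DtN map
# of the inner network from the measured one.
Lam_peeled = D @ np.linalg.inv(D - Lam) @ D - D
print(np.allclose(Lam_peeled, Lam_inner))
```

The forward elimination and the peeling update are exact algebraic inverses of each other, which is what makes the layer peeling a finite sequence of algebraic operations.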
5.2.3 Sensitivity optimal grids
The underlying grids of the pyramidal and two-sided networks are truly two dimensional, and they cannot
be constructed explicitly as in section 3 by reducing the problem to a one dimensional one. We define
the grids with the sensitivity function approach described in section 4.2.2. The computed sensitivity grid
points are presented in Figure 14, and we observe a few important properties. First, the neighboring points
corresponding to the same type of resistors (vertical or horizontal) form rather regular virtual quadrilaterals.
Second, the points corresponding to different types of resistors interlace in the sense of lying inside the
virtual quadrilaterals formed by the neighboring points of the other type. Finally, while there is some
refinement near the accessible boundary (more pronounced in the two-sided case), the grids remain quite
uniform throughout the covered portion of the domain.
Figure 14: Sensitivity optimal grids in the unit disk for the pyramidal network Γ_n (left) and the two-sided
network T_n (right) with n = 16. The accessible boundary segments B_A are solid red. Blue markers correspond
to vertical edges, red markers correspond to horizontal edges, and the measurement points are black.
Note from Figure 13 that the graph T_n lacks the upside-down symmetry. Thus, it is possible to come
up with two sets of optimal grid nodes, by fitting the measured DtN map M_n(Λ) once with a two-sided
network and a second time with the network turned upside-down. This way the number of nodes in the
grid is essentially doubled, thus doubling the resolution of the reconstruction. However, this approach can
only improve the resolution in the direction transversal to the depth, as shown in [17, Section 2.5].
5.3
Numerical results
We present in this section numerical reconstructions with partial boundary measurements. The reconstructions with the four methods from sections 5.1.1, 5.1.2, 5.2.1 and 5.2.2 are compared row by row in Figure
15. We use the same two test conductivities as in Figure 7(a). Each row in Figure 15 corresponds to one
method. For each test conductivity, we show first the piecewise linear interpolation of the entries returned
by the reconstruction mapping Qn , on the optimal grids (first and third column in Figure 15). Since these
grids do not cover the entire , we display the results only in the subset of populated by the grid points.
We also show the reconstructions after one-step of the Gauss-Newton iteration (4.11) (second and fourth
columns in Figure 15).
As expected, the reconstructions with the conformal mapping grids are the worst. The highly nonuniform conformal mapping grids cannot capture the details of the conductivities away from the middle of
the accessible boundary. The reconstructions with quasiconformal grids perform much better, capturing the
details of the conductivities much more uniformly throughout the domain. Although the piecewise linear
reconstructions Qn have slight distortions in the geometry, these distortions are later removed by the first
step of the Gauss-Newton iteration. The piecewise linear reconstructions with pyramidal and two-sided
networks avoid the geometrical distortions of the quasiconformal case, but they are also improved after one
step of the Gauss-Newton iteration.
Note that while the Gauss-Newton step improves the geometry of the reconstructions, it also introduces
Figure 15: Reconstructions with partial data. Same conductivities are used as in figure 7. Two leftmost
columns: smooth conductivity. Two rightmost columns: piecewise constant chest phantom. Columns 1
and 3: piecewise linear reconstructions. Columns 2 and 4: reconstructions after one step of Gauss-Newton
iteration (4.11). Rows from top to bottom: conformal mapping, quasiconformal mapping, pyramidal network,
two-sided network. Accessible boundary B_A is solid red. The centers of the supports of the measurement
(electrode) functions χ_q are marked on the boundary.
some spurious oscillations. This is more pronounced for the piecewise constant conductivity phantom (fourth
column in Figure 15). To overcome this problem one may consider regularizing the Gauss-Newton iteration
(4.11) by adding a penalty term of some sort. For example, for the piecewise constant phantom, we could
penalize the total variation of the reconstruction, as was done in [13].
Summary
We presented a discrete approach to the numerical solution of the inverse problem of electrical impedance
tomography (EIT) in two dimensions. Due to the severe ill-posedness of the problem, it is desirable to
parametrize the unknown conductivity (x) with as few parameters as possible, while still capturing the
best attainable resolution of the reconstruction. To obtain such a parametrization, we used a discrete, model
reduction formulation of the problem. The discrete models are resistor networks with special graphs.
We described in detail the solvability of the model reduction problem. First, we showed that boundary
measurements of the continuum Dirichlet to Neumann (DtN) map for the unknown (x) define matrices
that belong to the set of discrete DtN maps for resistor networks. Second, we described the types of network
graphs appropriate for different measurement setups. By appropriate we mean those graphs that ensure
unique recoverability of the network from its DtN map. Third, we showed how to determine the networks.
We established that the key ingredient in the connection between the discrete model reduction problem
(inverse problem for the network) and the continuum EIT problem is the optimal grid. The name optimal
refers to the fact that finite volume discretizations on these grids give spectrally accurate approximations
of the DtN map, the data in EIT. We defined reconstructions of the conductivity using the optimal grids,
and studied them in detail in three cases: (1) The case of layered media and full boundary measurements,
where the problem can be reduced to one dimension via Fourier transforms. (2) The case of two dimensional
media with measurement access to the entire boundary. (3) The case of two dimensional media with access
to a subset of the boundary.
We presented the available theory behind our inversion approach and illustrated its performance with
numerical simulations.
Acknowledgements
The work of L. Borcea was partially supported by the National Science Foundation grants DMS-0934594,
DMS-0907746 and by the Office of Naval Research grant N000140910290. The work of F. Guevara Vasquez
was partially supported by the National Science Foundation grant DMS-0934664. The work of A.V. Mamonov
was partially supported by the National Science Foundation grants DMS-0914465 and DMS-0914840. LB,
FGV and AVM were also partially supported by the National Science Foundation and the National Security
Agency, during the Fall 2010 special semester on Inverse Problems at MSRI, Berkeley, CA.
To understand definitions (3.8), recall Figure 1. Take for example the dual edge

    Σ_{i−1/2,j} = (P_{i−1/2,j−1/2}, P_{i−1/2,j+1/2}),

where P_{i−1/2,j−1/2} = r̂_{i−1} (cos θ̂_j, sin θ̂_j) and P_{i−1/2,j+1/2} = r̂_{i−1} (cos θ̂_{j+1}, sin θ̂_{j+1}). We have
from (2.5), and the change of variables r → z(r), that

    ∫_{Σ_{i−1/2,j}} σ(x) n(x) · ∇u(x) ds(x) = ∫_{θ̂_j}^{θ̂_{j+1}} σ(r̂_{i−1}, θ) r̂_{i−1} ∂u(r̂_{i−1}, θ)/∂r dθ
        ≈ h_θ ∂u(r̂_{i−1}, θ_j)/∂z ≈ h_θ (U_{i−1,j} − U_{i,j})/(z(r_i) − z(r_{i−1})),

which gives the first equation in (3.8). Similarly, the flux across

    Σ_{i,j+1/2} = (P_{i−1/2,j+1/2}, P_{i+1/2,j+1/2})

is given by

    ∫_{Σ_{i,j+1/2}} σ(x) n(x) · ∇u(x) ds(x) = ∫_{r̂_i}^{r̂_{i−1}} (σ(r)/r) ∂u(r, θ̂_{j+1})/∂θ dr
        ≈ ∂u(r_i, θ̂_{j+1})/∂θ ∫_{r̂_i}^{r̂_{i−1}} σ(r) dr/r ≈ (ẑ(r̂_i) − ẑ(r̂_{i−1})) (U_{i,j+1} − U_{i,j})/h_θ,

which gives the second equation in (3.8).
Let us begin with the system of equations satisfied by the potential V_j, which we rewrite as

    Φ_j = Φ_{j+1} + λ γ̂_{j+1} V_{j+1},   j = 0, 1, . . . , ℓ,   Φ_0 = Φ_B,   V_{ℓ+1} = 0,   (2.1)

where we let

    V_j = V_{j+1} + Φ_j/γ_j.   (2.2)

Combining the first equation in (2.1) with (2.2), we obtain the recursive relation

    Φ_j/V_j = [1/γ_j + 1/(λ γ̂_{j+1} + Φ_{j+1}/V_{j+1})]^{−1},   j = 1, 2, . . . , ℓ − 1,   (2.3)

with the terminal condition

    Φ_ℓ/V_ℓ = γ_ℓ.   (2.4)

The latter follows from (2.2) evaluated at j = ℓ, and the boundary condition V_{ℓ+1} = 0. We obtain that

    F(λ) = V_1/Φ_B = V_1/(Φ_1 + λ γ̂_1 V_1) = [λ γ̂_1 + Φ_1/V_1]^{−1},   (2.5)

and the continued fraction representation follows by expanding Φ_1/V_1 with (2.3) and terminating with (2.4).
To derive equation (3.47) let us begin with the Fourier series of the electrode functions

    χ_q(θ) = Σ_{k∈Z} C_q(k) e^{ikθ} = Σ_{k∈Z} C̄_q(k) e^{−ikθ},   (3.6)

where the bar denotes complex conjugate and the coefficients are

    C_q(k) = (1/2π) ∫_0^{2π} χ_q(θ) e^{−ikθ} dθ = (1/2π) e^{−ikθ_q} sinc(k h_θ/2).   (3.7)

Then, we have

    (Λ_χ)_{p,q} = ∫_0^{2π} χ_p(θ) (Λ χ_q)(θ) dθ = Σ_{k,k′∈Z} C̄_p(k) C_q(k′) f(k′²) ∫_0^{2π} e^{−ikθ} e^{ik′θ} dθ
        = (1/2π) Σ_{k∈Z} e^{ik(θ_p−θ_q)} f(k²) sinc²(k h_θ/2),   p ≠ q.   (3.8)

Moreover, conservation of currents gives

    (Λ_χ)_{p,p} = −Σ_{q≠p} (Λ_χ)_{p,q} = −(1/2π) Σ_{k∈Z} e^{ikθ_p} f(k²) sinc²(k h_θ/2) Σ_{q≠p} e^{−ikθ_q}.   (3.9)

But

    Σ_{q≠p} e^{−ikθ_q} = Σ_{q=1}^{n} e^{−i(2πk/n)(q−1)} − e^{−ikθ_p} = e^{−iπk(1−1/n)} sin(πk)/sin(πk/n) − e^{−ikθ_p} = n δ_{k,0} − e^{−ikθ_p},   (3.10)

where δ_{k,0} may be used because sinc(k h_θ/2) vanishes at the nonzero integer multiples of n.
Since f(0) = 0, we obtain from (3.9)–(3.12) that (3.8) holds for p = q, as well. This is the result (3.47).
Moreover, (3.48) follows from

    Σ_{q=1}^{n} (Λ_χ)_{p,q} e^{ikθ_q} = (1/2π) Σ_{k_1∈Z} e^{ik_1θ_p} f(k_1²) sinc²(k_1 h_θ/2) Σ_{q=1}^{n} e^{i(k−k_1)θ_q}
        = (n/2π) f(k²) sinc²(k h_θ/2) e^{ikθ_p},   (3.11)

where we used

    Σ_{q=1}^{n} e^{i(k−k_1)θ_q} = Σ_{q=1}^{n} e^{i(2π(k−k_1)/n)(q−1)} = n δ_{k,k_1}.   (3.12)
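The sums (3.8)–(3.12) are easy to evaluate numerically. The sketch below assembles the electrode matrix for the unit disk with σ ≡ 1, where f(k²) = |k|, truncating the Fourier sum; the rows sum to zero because f(0) = 0 and sinc(kh_θ/2) vanishes at the nonzero integer multiples of n (the truncation level is an illustrative choice):

```python
import numpy as np

# Electrode DtN matrix entries (3.8) for n electrodes on the unit circle
# with sigma = 1, so that f(k^2) = |k|. Truncate the sum at |k| <= kmax.
n, kmax = 8, 400
h = 2 * np.pi / n
theta = h * np.arange(n)                 # electrode centers theta_q
k = np.arange(-kmax, kmax + 1)
f = np.abs(k).astype(float)              # f(k^2) = |k|
# np.sinc(x) = sin(pi x)/(pi x), so sinc(k h/2) = np.sinc(k h / (2 pi))
s2 = np.sinc(k * h / (2 * np.pi)) ** 2
dtheta = theta[:, None] - theta[None, :]  # theta_p - theta_q
M = (np.exp(1j * np.outer(dtheta.ravel(), k)) @ (f * s2)).reshape(n, n)
M = M.real / (2 * np.pi)

print(np.max(np.abs(M - M.T)))            # symmetric
print(np.max(np.abs(M.sum(axis=1))))      # rows sum to ~0
```

Both printed residuals are at roundoff level, reflecting the symmetry of (3.8) in p, q and the conservation of currents used in (3.9).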
Consider the case m_{1/2} = 1, where F̃(λ) = 1/F(λ) follows from (3.36). We rename the coefficients as

    κ_{2j−1} = γ̂_j,   κ_{2j} = γ_j,   j = 1, . . . , ℓ,   (4.13)

and write

    F(x²)/x = κ_1 x + 1/(κ_2 x + 1/(⋯ + 1/(κ_{2ℓ−1} x + 1/(κ_{2ℓ} x)) ⋯ )).   (4.14)

To determine κ_j, for j = 1, . . . , 2ℓ, we write first (4.14) as the ratio of two polynomials of x, P_{2ℓ}(x) and
Q_{2ℓ−1}(x), of degrees 2ℓ and 2ℓ − 1 respectively, and seek their coefficients c_j,

    F(x²)/x = P_{2ℓ}(x)/Q_{2ℓ−1}(x) = (c_{2ℓ} x^{2ℓ} + c_{2(ℓ−1)} x^{2(ℓ−1)} + ⋯ + c_2 x² + c_0)/(c_{2ℓ−1} x^{2ℓ−1} + c_{2ℓ−3} x^{2ℓ−3} + ⋯ + c_1 x).   (4.15)

Now suppose that we have measurements of F at λ_k = x_k², for k = 1, . . . , 2ℓ, and introduce the notation

    D_k = F(x_k²)/x_k.   (4.16)
(4.16)
We obtain from (4.15) the following linear system of equations for the coefficients,

    P_{2ℓ}(x_k) − D_k Q_{2ℓ−1}(x_k) = 0,   k = 1, . . . , 2ℓ,   (4.17)

or, after normalizing c_0 = 1,

    D_k x_k c_1 − x_k² c_2 + D_k x_k³ c_3 − ⋯ + D_k x_k^{2ℓ−1} c_{2ℓ−1} − x_k^{2ℓ} c_{2ℓ} = 1,   k = 1, . . . , 2ℓ,   (4.18)

with right hand side a vector of all ones. The coefficients are obtained by inverting the Vandermonde-like
matrix in (4.18). In the special case of the rational interpolation (3.43), it is precisely a Vandermonde matrix.
Since the condition number of such matrices grows exponentially with their size [33], the determination of
{c_j}_{j=1,...,2ℓ} is an ill-posed problem, as stated in Remark 1.
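The exponential growth of the condition number is easy to observe (the nodes in (0, 1] below are an illustrative choice):

```python
import numpy as np

# Condition number of the Vandermonde matrix with nodes x_k in (0, 1]
# grows exponentially with the size, which makes the determination of
# the coefficients c_j from (4.18) severely ill-posed.
for m in (4, 8, 12, 16):
    x = np.linspace(0.1, 1.0, m)
    V = np.vander(x, increasing=True)   # columns 1, x, x^2, ...
    print(m, np.linalg.cond(V))
```

Each additional batch of four nodes costs several orders of magnitude in conditioning, which is why the reconstruction is regularized by keeping ℓ small.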
Once we have determined the polynomials P_{2ℓ}(x) and Q_{2ℓ−1}(x), we can obtain {κ_j}_{j=1,...,2ℓ} by Euclidean
division. To get κ_1, write

    P_{2ℓ}(x)/Q_{2ℓ−1}(x) = κ_1 x + 1/(κ_2 x + 1/(⋯ + 1/(κ_{2ℓ−1} x + 1/(κ_{2ℓ} x)) ⋯ )) = κ_1 x + P_{2ℓ−2}(x)/Q_{2ℓ−1}(x),   (4.19)

so that

    κ_1 = c_{2ℓ}/c_{2ℓ−1},   (4.20)

and the remainder polynomial P_{2ℓ−2}(x) has the coefficients

    c̃_{2j} = c_{2j} − κ_1 c_{2j−1},   j = 1, . . . , ℓ − 1,   (4.21)

    c̃_0 = c_0.   (4.22)

Then, we proceed similarly to get κ_2, and introduce a new polynomial Q_{2ℓ−3}(x) so that

    Q_{2ℓ−1}(x)/P_{2ℓ−2}(x) = κ_2 x + 1/(κ_3 x + 1/(⋯ + 1/(κ_{2ℓ−1} x + 1/(κ_{2ℓ} x)) ⋯ )) = κ_2 x + Q_{2ℓ−3}(x)/P_{2ℓ−2}(x).   (4.23)

The process continues in the same way until all the coefficients κ_1, . . . , κ_{2ℓ} are determined.
Consider the symmetric tridiagonal matrix

    Ã = [ a_1  b_1                     ]
        [ b_1  a_2  b_2                ]
        [      ...  ...  ...           ]
        [           b_{ℓ−1}  a_ℓ       ],   (5.24)
where a_j are the negative diagonal entries and b_j the positive off-diagonal ones. Let also

    Λ̃ = diag(−λ_1², . . . , −λ_ℓ²)   (5.25)

be the matrix of the eigenvalues of Ã, and

    Ỹ_j = (Ỹ_{1,j}, . . . , Ỹ_{ℓ,j})ᵀ,   j = 1, . . . , ℓ,   (5.26)

the eigenvectors. They are orthonormal and the matrix Ỹ = (Ỹ_1, . . . , Ỹ_ℓ) is orthogonal,

    Ỹ Ỹᵀ = Ỹᵀ Ỹ = I.   (5.27)

The spectral theorem gives that Ã = Ỹ Λ̃ Ỹᵀ or, equivalently,

    Ã Ỹ = Ỹ Λ̃.   (5.28)
The spectral data consist of the eigenvalues

    −λ_j²,   j = 1, . . . , ℓ,   (5.29)

and the weights

    ξ_j = Ỹ_{1,j}²/γ̂_1,   j = 1, . . . , ℓ.   (5.30)

The orthogonality (5.27) gives

    Σ_{j=1}^{ℓ} Ỹ_{1,j}² = γ̂_1 Σ_{j=1}^{ℓ} ξ_j = 1,   (5.31)

which determines γ̂_1, and we can set

    W_1 = √γ̂_1 (√ξ_1, . . . , √ξ_ℓ)ᵀ.   (5.32)

The Lanczos iteration then computes

    a_1 = W_1ᵀ Λ̃ W_1 = −γ̂_1 Σ_{j=1}^{ℓ} λ_j² ξ_j,   (5.33)

    b_1 = ‖Λ̃ W_1 − a_1 W_1‖,   (5.34)

and

    W_2 = b_1^{−1} (Λ̃ W_1 − a_1 W_1).   (5.35)

Next,

    a_2 = W_2ᵀ Λ̃ W_2,   (5.36)

    b_2 = ‖Λ̃ W_2 − a_2 W_2 − b_1 W_1‖.   (5.37)

Moreover,

    W_3 = b_2^{−1} (Λ̃ W_2 − a_2 W_2 − b_1 W_1),   (5.38)

and so on, until all the entries {a_j, b_j} of Ã are determined.
From the entries of Ã we can compute {γ_j, γ̂_j}_{j=1,...,ℓ}. We already have from (5.31) that

    γ̂_1 = 1/Σ_{j=1}^{ℓ} ξ_j.   (5.39)

The remaining coefficients follow recursively from the identification

    a_j = −(1/γ̂_j)(1/γ_j + 1/γ_{j−1}),   b_j = 1/(γ_j √(γ̂_j γ̂_{j+1})).   (5.40)
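The iteration (5.31)–(5.38) is the classical Lanczos reconstruction of a Jacobi matrix from its eigenvalues and the first components of its eigenvectors. A compact sketch (the test matrix is an arbitrary stand-in):

```python
import numpy as np

def lanczos_from_spectral(eigs, w):
    """Recover the tridiagonal entries a_j, b_j from the eigenvalues and
    the first-row eigenvector components w, by running the Lanczos
    iteration with the diagonal matrix of eigenvalues."""
    L = np.diag(eigs)
    ell = len(eigs)
    W = np.zeros((ell, ell))
    W[:, 0] = w / np.linalg.norm(w)
    a, b = [], []
    for j in range(ell):
        v = L @ W[:, j]
        a.append(W[:, j] @ v)             # diagonal entry a_{j+1}
        v -= a[-1] * W[:, j]
        if j > 0:
            v -= b[-1] * W[:, j - 1]
        if j < ell - 1:
            b.append(np.linalg.norm(v))   # off-diagonal entry b_{j+1}
            W[:, j + 1] = v / b[-1]
    return np.array(a), np.array(b)

# Arbitrary Jacobi matrix for the check
a_true = np.array([-2.0, -3.0, -1.5, -2.5])
b_true = np.array([0.7, 1.1, 0.4])
A = np.diag(a_true) + np.diag(b_true, 1) + np.diag(b_true, -1)
eigs, Y = np.linalg.eigh(A)
a, b = lanczos_from_spectral(eigs, Y[0, :])
print(np.allclose(a, a_true), np.allclose(b, b_true))
```

The recovery is exact (up to roundoff) because an unreduced Jacobi matrix is determined uniquely by its eigenvalues and first-row eigenvector components.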
To prove Lemma 3, let A^{(q)} be the tridiagonal matrix with entries defined by {γ_j^{(q)}, γ̂_j^{(q)}}_{j=1,...,ℓ}, as in
(5.24), and let A^{(o)} be defined similarly by {γ_j^{(o)}, γ̂_j^{(o)}}_{j=1,...,ℓ}. By
the uniqueness of solution of the inverse spectral problem and (3.82)–(3.83), the matrices A^{(q)} and A^{(o)} are
related by

    diag(√(γ̂_1^{(o)}/γ̂_1^{(q)}), . . . , √(γ̂_ℓ^{(o)}/γ̂_ℓ^{(q)})) A^{(q)} diag(√(γ̂_1^{(q)}/γ̂_1^{(o)}), . . . , √(γ̂_ℓ^{(q)}/γ̂_ℓ^{(o)})) = A^{(o)} − q I.   (6.41)

The eigenvectors Y_j^{(q)} and Y_j^{(o)} of A^{(q)} and A^{(o)}, respectively, are related by

    diag(√γ̂_1^{(q)}, . . . , √γ̂_ℓ^{(q)}) Y_j^{(q)} = diag(√γ̂_1^{(o)}, . . . , √γ̂_ℓ^{(o)}) Y_j^{(o)},   j = 1, . . . , ℓ,   (6.42)

which gives

    γ̂_1^{(q)} Σ_{j=1}^{ℓ} ξ_j^{(q)} = γ̂_1^{(o)} Σ_{j=1}^{ℓ} ξ_j^{(o)} = 1,   (6.43)

and therefore

    γ̂_1^{(q)}/γ̂_1^{(o)} = 1 = σ^{(q)}(0).   (6.44)

Moreover, straightforward algebraic manipulations of the equations in (6.41) and definitions (3.84) give the
finite difference equations (3.85). □
Next we compare the grids. We have

    ∫_0^{ẑ_{j+1}^{(q)}} σ^{(q)}(z) dz = Σ_{p=1}^{j} γ̂_p^{(q)} σ_p^{(q)} + o(1) = Σ_{p=1}^{j} γ̂_p^{(o)} σ_p^{(q)} + o(1).   (6.45)

Here we used the convergence result in Theorem 2 and denote by o(1) a negligible residual in the limit
ℓ → ∞. We have, similarly,

    ∫_0^{ẑ_{j+1}^{(o)}} σ^{(q)}(z) dz = Σ_{p=1}^{j} γ̂_p^{(o)} σ^{(q)}(ẑ_p^{(o)}) + o(1),   (6.46)

and therefore, using the lower bound on σ^{(q)},

    |ẑ_{j+1}^{(q)} − ẑ_{j+1}^{(o)}| ≤ C Σ_{p=1}^{j+1} γ̂_p^{(o)} |σ_p^{(q)} − σ^{(q)}(ẑ_p^{(o)})| + o(1).   (6.47)

But the first term in the bound is just the error of the quadrature on the optimal grid, with nodes at ẑ_j^{(o)} and
weights γ̂_j^{(o)} = ẑ_{j+1}^{(o)} − ẑ_j^{(o)}, and it converges to zero by the properties of the optimal grid stated in Lemma 2
as ℓ → ∞. Thus,

    ẑ_{j+1}^{(q)} − ẑ_{j+1}^{(o)} → 0   as ℓ → ∞,   (6.48)

and the proof for the primary grid nodes is similar. □
Perturbation analysis

It is shown in [14, Appendix B] that the skew-symmetric matrix B given in (3.93) has eigenvalues ±iλ_j and
eigenvectors

    Y(±λ_j) = (1/√2) (Y_1(λ_j), ±iŶ_1(λ_j), . . . , Y_ℓ(λ_j), ±iŶ_ℓ(λ_j))ᵀ,   (7.49)

where

    (Y_1(λ_j), . . . , Y_ℓ(λ_j))ᵀ = diag(γ̂_1^{1/2}, . . . , γ̂_ℓ^{1/2}) Y_j,   (Ŷ_1(λ_j), . . . , Ŷ_ℓ(λ_j))ᵀ = diag(γ_1^{1/2}, . . . , γ_ℓ^{1/2}) Ŷ_j.   (7.50)

Here Y_j = (Y_{1,j}, . . . , Y_{ℓ,j})ᵀ are the eigenvectors of the matrix A for the eigenvalues −λ_j², and
Ŷ_j = (Ŷ_{1,j}, . . . , Ŷ_{ℓ,j})ᵀ is given by

    Ŷ_{p,j} = (Y_{p+1,j} − Y_{p,j})/(λ_j γ_p).   (7.51)

G.1 Discrete Gelfand–Levitan formulation
It is difficult to carry out a precise perturbation analysis of the recursive Lanczos iteration that gives B from
the spectral data. We use instead the following discrete Gelfand–Levitan formulation due to Natterer [58].
Consider the reference matrix B_r, for an arbitrary, but fixed r ∈ [0, 1], and define the lower triangular,
2ℓ × 2ℓ matrix G satisfying

    E B_r G = E G B,   e_1ᵀ G = e_1ᵀ,   (7.52)

where E = I − e_{2ℓ} e_{2ℓ}ᵀ. Clearly, if B = B_r, then G = G_r = I, the identity. In general, G is lower triangular
and it is uniquely defined, as shown with a Lanczos iteration argument in [14, Section 6.2].
Next, consider the initial value problem

    E B ψ(λ) = iλ E ψ(λ),   e_1ᵀ ψ(λ) = 1,   (7.53)

which has a unique solution ψ(λ) ∈ C^{2ℓ}, as shown in [14, Section 6.2]. When λ = λ_j, one of the eigenvalues
of B, we have

    ψ(λ_j) = (√2/Y_1(λ_j)) Y(λ_j) = (√2/(γ̂_1^{1/2} Y_{1,j})) Y(λ_j),   (7.54)

and (7.53) holds even for E replaced by the identity matrix. The analogue of (7.53) for B_r is

    E B_r ψ_r(λ) = iλ E ψ_r(λ),   e_1ᵀ ψ_r(λ) = 1.   (7.55)

The definition (7.52) of G and the uniqueness of the solution of (7.55) imply that

    ψ_r(λ_j) = G ψ(λ_j),   1 ≤ j ≤ ℓ,   (7.56)

or, in matrix form,

    Ψ_r = G Ψ,   (7.57)

where Ψ is the matrix with columns (7.54), Y is the orthogonal matrix of eigenvectors of B with columns
(7.49), and S is the diagonal scaling matrix

    S = √(2/γ̂_1) diag(ξ_1^{−1/2}, ξ_1^{−1/2}, . . . , ξ_ℓ^{−1/2}, ξ_ℓ^{−1/2}),   (7.58)

so that Ψ = Y S. Then, letting

    F = Ψ_r S^{−1} = G Y,   (7.59)

we obtain

    F F* = G Y Y* Gᵀ = G Gᵀ,   (7.60)

where the star denotes complex conjugate and transpose. Moreover, equation (7.52) gives

    E B_r F = E B_r G Y = E G B Y = iE G Y D = iE F D,   (7.61)

where D is the real diagonal matrix of eigenvalues, B Y = iY D.
The discrete Gelfand–Levitan inversion method proceeds as follows. Start with a known reference
matrix B_r, for some r ∈ [0, 1]. The usual choice is B_0 = B^{(o)}, the matrix corresponding to the constant
coefficient σ^{(o)} ≡ 1. Determine Ψ_r from (7.55), with a Lanczos iteration, as explained in [14, Section 6.2].
Then, F = Ψ_r S^{−1} is determined by the spectral data λ_j and ξ_j, for 1 ≤ j ≤ ℓ. The matrix G is obtained
from (7.60) by a Cholesky factorization, and B follows by solving (7.52), using a Lanczos iteration.
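The central linear algebra step, recovering the lower triangular G from (7.60) by a Cholesky factorization, can be illustrated in isolation: if F = GY with Y orthogonal and G has positive diagonal, then FFᵀ = GGᵀ and the Cholesky factor of FFᵀ is G (the matrices below are random stand-ins, not the B_r, Ψ_r of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
ell = 5

# A lower triangular G with positive diagonal, and an orthogonal Y.
G = np.tril(rng.standard_normal((ell, ell)))
np.fill_diagonal(G, np.abs(np.diag(G)) + 1.0)
Y, _ = np.linalg.qr(rng.standard_normal((ell, ell)))

# Only F = G Y is "measured"; G is recovered by Cholesky of F F^T,
# since F F^T = G Y Y^T G^T = G G^T.
F = G @ Y
G_rec = np.linalg.cholesky(F @ F.T)
print(np.allclose(G_rec, G))
```

Uniqueness of the Cholesky factor with positive diagonal is what makes G well defined by the data in F alone.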
G.2 Perturbation estimate

Consider a perturbation of the spectral data and write

    D̃ = D_r + dD,   S̃ = S_r + dS,   Ỹ = Y_r + dY,   F̃ = Y_r + dF,   (7.62)

with D, S, Y and F defined above. Note that F_r = Y_r, because G_r = I. Substituting (7.62) in (7.61) and
using (7.55), we get
    E B_r dF = iE Y_r dD + iE dF D_r.   (7.63)

Now multiply on the right by Y_r*, and use B_r Y_r = iY_r D_r to conclude that dW = dF Y_r* satisfies

    E B_r dW − E dW B_r = iE Y_r dD Y_r*.   (7.64)
The first row of dW is determined by the perturbation of the spectral data,

    e_1ᵀ dW = e_1ᵀ dF Y_r* = (d√(γ̂_1 ξ_1/2), d√(γ̂_1 ξ_1/2), . . . , d√(γ̂_1 ξ_ℓ/2), d√(γ̂_1 ξ_ℓ/2)) Y_r*.   (7.65)

Linearizing (7.52) gives

    E dB = E (B_r dG − dG B_r),   e_1ᵀ dG = 0,   (7.66)

and linearizing (7.60) gives

    dF Y_r* + Y_r dF* = dW + dW* = dG + dGᵀ.   (7.67)
Equations (7.64)–(7.67) allow us to estimate dκ_j/κ_j^r. Indeed, consider the (j, j + 1) component in (7.66)
and use (7.67) and the structure of G, dG and B_r to get

    dκ_j/κ_j^r = dG_{j+1,j+1} − dG_{j,j} = dW_{j+1,j+1} − dW_{j,j},   j = 1, . . . , 2ℓ − 1.   (7.68)
The right hand side is given by the components of dW satisfying (7.64)–(7.65), and calculated explicitly in [14,
Appendix C] in terms of the eigenvalues and eigenvectors of B_r. Then, the estimate

    Σ_{j=1}^{2ℓ−1} |dκ_j/κ_j^r| ≤ C_1 |dr|,   (7.69)

which is equivalent to (3.98), follows after some calculation given in [14, Section 6.3], using the assumptions
there on the reference spectral data λ_j^r and ξ_j^r.
References
[1] G. Alessandrini. Stable determination of conductivity by boundary measurements. Applicable Analysis, 27(1):153–172, 1988.
[2] G. Alessandrini and S. Vessella. Lipschitz stability for the inverse conductivity problem. Advances in Applied Mathematics, 35(2):207–241, 2005.
[3] H.B. Ameur, G. Chavent, and J. Jaffre. Refinement and coarsening indicators for adaptive parametrization: application to the estimation of hydraulic transmissivities. Inverse Problems, 18:775, 2002.
[4] H.B. Ameur and B. Kaltenbacher. Regularization of parameter estimation by adaptive discretization using refinement and coarsening indicators. Journal of Inverse and Ill-Posed Problems, 10(6):561–584, 2002.
[5] K. Astala, L. Päivärinta, and M. Lassas. Calderón's inverse problem for anisotropic conductivity in the plane. Communications in Partial Differential Equations, 30(1):207–224, 2005.
[6] S. Asvadurov, V. Druskin, M.N. Guddati, and L. Knizhnerman. On optimal finite-difference approximation of PML. SIAM Journal on Numerical Analysis, 41(1):287–305, 2004.
[7] S. Asvadurov, V. Druskin, and L. Knizhnerman. Application of the difference Gaussian rules to solution of hyperbolic problems. Journal of Computational Physics, 158(1):116–135, 2000.
[8] S. Asvadurov, V. Druskin, and S. Moskow. Optimal grids for anisotropic problems. Electronic Transactions on Numerical Analysis, 26:55–81, 2007.
[9] J.A. Barcelo, T. Barcelo, and A. Ruiz. Stability of the inverse conductivity problem in the plane for less regular conductivities. Journal of Differential Equations, 173(2):231–270, 2001.
[10] O.D. Biesel, D.V. Ingerman, J.A. Morrow, and W.T. Shore. Layered networks, the discrete Laplacian, and a continued fraction identity. http://www.math.washington.edu/reu/papers/current/william/layered.pdf.
[11] L. Borcea. Electrical impedance tomography. Topical review. Inverse Problems, 18(6):R99–R136, 2002.
[12] L. Borcea and V. Druskin. Optimal finite difference grids for direct and inverse Sturm-Liouville problems. Inverse Problems, 18(4):979–1002, 2002.
[13] L. Borcea, V. Druskin, and F. Guevara Vasquez. Electrical impedance tomography with resistor networks. Inverse Problems, 24(3):035013 (31pp), 2008.
[14] L. Borcea, V. Druskin, and L. Knizhnerman. On the Continuum Limit of a Discrete Inverse Spectral
Problem on Optimal Finite Difference Grids. Communications on Pure and Applied Mathematics,
58(9):1231, 2005.
[15] L. Borcea, V. Druskin, and A.V. Mamonov. Circular resistor networks for electrical impedance tomography with partial boundary measurements. Inverse Problems, 26(4):045010, 2010.
[16] L. Borcea, V. Druskin, A.V. Mamonov, and F. Guevara Vasquez. Pyramidal resistor networks for electrical impedance tomography with partial boundary measurements. Inverse Problems, 26(10):105009,
2010.
[17] L. Borcea, F. Guevara Vasquez, and A. V. Mamonov. Uncertainty quantification for electrical impedance
tomography with resistor networks. submitted to Inverse Problems. ArXiv:1105.1183v1 [math-ph].
[18] R.M. Brown and G. Uhlmann. Uniqueness in the inverse conductivity problem for nonsmooth conductivities in two dimensions. Commun. Partial Diff. Eqns, 22:1009–1027, 1997.
[19] K. Chadan. An introduction to inverse scattering and inverse spectral problems. Society for Industrial
Mathematics, 1997.
[20] M.T. Chu and G.H. Golub. Structured inverse eigenvalue problems. Acta Numerica, 11:1–71, 2002.
[21] C.F. Coleman and J.R. McLaughlin. Solution of the inverse spectral problem for an impedance with integrable derivative, I, II. Comm. Pure Appl. Math., 46(2):145–212, 1993.
[22] E. Curtis, E. Mooers, and J.A. Morrow. Finding the conductors in circular networks from boundary measurements. RAIRO - Mathematical Modelling and Numerical Analysis, 28:781–814, 1994.
[23] E.B. Curtis, D. Ingerman, and J.A. Morrow. Circular planar graphs and resistor networks. Linear Algebra and its Applications, 23:115–150, 1998.
[24] E.B. Curtis and J.A. Morrow. Inverse problems for electrical networks. World Scientific, 2000.
[25] Y.C. de Verdière. Réseaux électriques planaires I. Commentarii Mathematici Helvetici, 69(1):351–374, 1994.
[26] Y.C. de Verdière, I. Gitler, and D. Vertigan. Réseaux électriques planaires II. Commentarii Mathematici Helvetici, 71(1):144–167, 1996.
[27] V. Druskin. The unique solution of the inverse problem of electrical surveying and electrical well-logging
for piecewise-continuous conductivity. Izv. Earth Physics, 18:513, 1982.
[28] V. Druskin. On uniqueness of the determination of the three-dimensional underground structures from
surface measurements with variously positioned steady-state or monochromatic field sources. Sov. Phys.
Solid Earth, 21:2104, 1985.
[29] V. Druskin and L. Knizhnerman. Gaussian spectral rules for second order finite-difference schemes. Numerical Algorithms, 25(1):139–159, 2000.
[30] V. Druskin and L. Knizhnerman. Gaussian spectral rules for the three-point second differences: I. A two-point positive definite problem in a semi-infinite domain. SIAM Journal on Numerical Analysis, 37(2):403–422, 2000.
[31] V. Druskin and S. Moskow. Three-point finite-difference schemes, Padé and the spectral Galerkin method. I. One-sided impedance approximation. Mathematics of Computation, 71(239):995–1020, 2002.
[32] V. Druskin and S. Moskow. Three-point finite-difference schemes, Padé and the spectral Galerkin method. I. One-sided impedance approximation. Mathematics of Computation, 71(239):995–1020, 2002.
[33] W. Gautschi and G. Inglese. Lower bounds for the condition number of Vandermonde matrices. Numerische Mathematik, 52(3):241–250, 1987.
[34] I.M. Gelfand and B.M. Levitan. On the determination of a differential equation from its spectral function. Izvestiya Rossiiskoi Akademii Nauk. Seriya Matematicheskaya, 15(4):309–360, 1951.
[35] I.M. Gelfand and B.M. Levitan. On the determination of a differential equation from its spectral function. Izvestiya Rossiiskoi Akademii Nauk. Seriya Matematicheskaya, 15(4):309–360, 1951.
[36] S. K. Godunov and V. S. Ryabenkii. The theory of difference schemes An introduction. North
Holland, Amsterdam, 1964.
[37] F. Guevara Vasquez. On the Parametrization of Ill-posed Inverse Problems Arising from Elliptic Partial
Differential Equations. PhD thesis, Rice University, Houston, TX, USA, 2006.
[38] H. Hochstadt. The inverse Sturm-Liouville problem. Communications on Pure and Applied Mathematics, 26(5-6):715–729, 1973.
[39] O.Y. Imanuvilov, G. Uhlmann, and M. Yamamoto. Global uniqueness from partial Cauchy data in two
dimensions. Arxiv preprint arXiv:0810.2286, 2008.
[40] D. Ingerman. Discrete and continuous Dirichlet-to-Neumann maps in the layered case. SIAM Journal on Mathematical Analysis, 31:1214–1234, 2000.
[41] D. Ingerman, V. Druskin, and L. Knizhnerman. Optimal finite difference grids and rational approximations of the square root I. Elliptic problems. Communications on Pure and Applied Mathematics, 53(8):1039–1066, 2000.
[42] D. Ingerman and J.A. Morrow. On a characterization of the kernel of the Dirichlet-to-Neumann map for a planar region. SIAM Journal on Applied Mathematics, 29:106–115, 1998.
[43] D. Isaacson. Distinguishability of conductivities by electric current computed tomography. IEEE Transactions on Medical Imaging, 5(2):91–95, 1986.
[44] I.S. Kac and M.G. Krein. On the spectral functions of the string. Amer. Math. Soc. Transl., 103(2):19–102, 1974.
[45] R. Kohn and M. Vogelius. Determining conductivity by boundary measurements. Communications on Pure and Applied Mathematics, 37:289–298, 1984.
[46] R. Kohn and M. Vogelius. Determining conductivity by boundary measurements II. Interior results.
Communications on Pure and Applied Mathematics, 38(5), 1985.
[47] S. Lang. Undergraduate algebra. Springer Verlag, 2005.
[48] M.A. Lavrentiev and B.V. Shabat. Methods of the complex variable function theory (in Russian). Nauka,
Moscow, 1987.
[52] A.V. Mamonov. Resistor Networks and Optimal Grids for the Numerical Solution of Electrical Impedance Tomography with Partial Boundary Measurements. PhD thesis, Rice University, Houston, TX, USA, 2010.
[53] N. Mandache. Exponential instability in an inverse problem for the Schrödinger equation. Inverse Problems, 17(5):1435–1444, 2001.
[54] V.A. Marchenko. Sturm-Liouville Operators and Applications. Chelsea Pub Co, 2011.
[55] J.R. McLaughlin and W. Rundell. A uniqueness theorem for an inverse Sturm–Liouville problem. Journal of Mathematical Physics, 28:1471, 1987.
[56] A.I. Nachman. Global uniqueness for a two-dimensional inverse boundary value problem. Annals of Mathematics, pages 71–96, 1996.
[57] I. Natanson. Theory of functions of a real variable, volume 1. Ungar Pub Co, New York, 1961.
[58] F. Natterer. A discrete Gelfand-Levitan theory. Technical report, Institut fuer Numerische und instrumentelle Mathematik, 1994.
[59] E.M. Nikishin and V.N. Sorokin. Rational approximations and orthogonality. Amer Mathematical
Society, 1991.
[60] J. Pöschel and E. Trubowitz. Inverse Spectral Theory. Pure and Applied Mathematics, volume 130. Academic Press, Boston, MA, 1987.
[61] A. Quarteroni and A. Valli. Domain decomposition methods for partial differential equations. Oxford
University Press, USA, 1999.
[62] E. Reich. Quasiconformal mappings of the disk with given boundary values. Lecture Notes in Mathematics, 505:101–137, 1976.
[63] K. Strebel. On the existence of extremal Teichmüller mappings. Journal d'Analyse Mathématique, 30(1):464–480, 1976.
[64] J. Sylvester. An anisotropic inverse boundary value problem. Communications on Pure and Applied
Mathematics, 43(2):201232, 1990.
[65] L.N. Trefethen and D. Bau. Numerical linear algebra. Number 50. Society for Industrial Mathematics,
1997.