GLASNIK MATEMATIČKI Vol. 53(73)(2018), 51 – 71
1. Introduction
The study of the class of totally positive matrices was initiated in the
1930s by F. R. Gantmacher and M. G. Krein ([6]); extensive treatments are given by S. Karlin ([10]) and by Fallat and Johnson ([4]). Totally positive matrices are worth investigating not only for their mathematical beauty, but also for their myriad applications: among other areas, they arise in statistics, approximation theory, operator theory, combinatorics, and planar resistor networks ([5, 8, 10]).
A matrix is said to be totally positive (totally nonnegative) if all its mi-
nors are positive (nonnegative). In this note, we focus our attention on totally
positive matrices, but the results can be extended to nonsingular totally non-
negative matrices. Factorizations of totally positive and totally nonnegative,
and totally nonpositive matrices have been examined in [3, 4, 9] and [2], respectively, to minimize the computation of minors while testing for total positivity. More importantly, in [3, Lemma 5.1] and [7, Theorem 1] it was proved that a square matrix all of whose leading principal minors are nonzero admits a unique LU factorization in which L is unit lower triangular.
Since totally positive matrices have nonzero principal minors, the above
Theorem provides a unique LU factorization for totally positive matrices
where L is unit lower triangular. The Theorem also applies to invertible
totally nonnegative matrices due to [1, Corollary 3.8]. In turn, this Theorem
leads to a unique LDU factorization, where L and U are unit lower and unit upper triangular matrices, respectively, and D is a diagonal matrix ([7, Theorem 2]).
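To make the definition above concrete, here is a minimal brute-force check of total positivity in Python (the helper name is_totally_positive is ours, introduced only for illustration, and numpy is assumed to be available); it examines every minor, which is exactly the cost that the factorizations discussed in this note are designed to avoid.

```python
import numpy as np
from itertools import combinations

def is_totally_positive(A, tol=1e-12):
    """Brute-force test: every minor of every order must be positive."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if np.linalg.det(A[np.ix_(rows, cols)]) <= tol:
                    return False
    return True

# A small Pascal-type matrix, a standard example of a totally positive matrix.
print(is_totally_positive([[1, 1, 1],
                           [1, 2, 3],
                           [1, 3, 6]]))  # True
```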
In [5], the authors define an essential planar network, $\Gamma_0$, to be a directed (from left to right) planar network whose edges can either be slanted or horizontal in the middle of the network ([5, Figure 2]).
sources and n sinks are numbered bottom-to-top by 1, 2, . . . , n, in [5]; whereas
in this note, all planar networks are numbered bottom-to-top by 0, 1, . . . , n−1;
by the same token, all n × n matrices discussed here have rows and columns
numbered from 0 to n − 1. In section 2, we give a detailed description of
the planar network used in this work in light of the Definition of Fomin and
Zelevinsky. This note adds to the combinatorial approach developed in [5] to
study the parametrization of totally positive matrices.
It has been proved in [5, Theorem 5] that n × n totally positive matrices are parametrized by $n^2$ positive parameters. More specifically, the parameters are the weights of an essential planar network, and the entries of a totally positive matrix are recovered from these weights. In this note, we use these $n^2$ parameters to explicitly describe the entries of the matrices L, D, and U. Furthermore, we introduce recursive formulae for the L, D, and U factors. Using these formulae, we compute the entries of an n × n matrix given an (n − 1) × (n − 1) matrix. More specifically, given the LDU factorization of an (n − 1) × (n − 1) totally positive matrix, we are able to compute the entries of the L, D and U factors of an n × n matrix. Also, we use the $n^2$ parameters to parametrize the inverse of a totally positive matrix.
In section 3, we obtain recursive formulae for computing the (n+1) × (n+1) lower triangular matrix $L_n$ and upper triangular matrix $U_n$ described in section 2. We then provide closed-form formulae for the lower and upper triangular matrices in light of their corresponding planar subnetworks.
In section 4, we provide a combinatorial description for computing the inverses of $L_n$, $D_n$, and $U_n$, and hence the inverse of a totally positive matrix. In addition, we obtain closed-form formulae for computing the entries of $L_n^{-1}$, $D_n^{-1}$ and $U_n^{-1}$.
for all 0 ≤ i, j ≤ n, where the weights of the edges are defined as follows:
• For $j \neq 0$, $\omega_{L_n}([(m+k, j), (m+k+1, j-1)]) = t_{j,\, k+j-n}$,
• $\omega_{L_n}([(m+k, j), (m+k+1, j)]) = 1$,
for all j, k = 0, 1, . . . , n.
Figure 1a shows an example of an $L_5^0$-path. It consists of only horizontal and fall steps. Figure 1b shows another path which is not an $L_5^0$-path because the fall step from (0, 4) to (1, 3) is not admissible by the last condition in Definition 2.1.
[Figure 1. (a) An $L_5^0$-path; (b) a path that is not an $L_5^0$-path.]
Figure 2. Description of $L_2$: (a) the L-type subnetwork of order 2 with weighting function $\omega_{L_n}$; (b) its weight matrix
$$L_2 = \begin{pmatrix} 1 & 0 & 0 \\ t_{1,0} & 1 & 0 \\ t_{2,0}\, t_{1,0} & t_{2,0} + t_{2,1} & 1 \end{pmatrix}.$$
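To make Figure 2 concrete, the following sketch (a Python illustration under our reading of the weighting rule above, taking m = 0 and substituting sample positive values for $t_{1,0}$, $t_{2,0}$, $t_{2,1}$; the helper weight_matrix is ours) enumerates all source-to-sink paths of the L-type subnetwork of order 2 and sums their weights, recovering the matrix $L_2$ displayed above.

```python
# Weights of the L-type subnetwork of order n = 2 (our reading of Definition 2.2, m = 0):
# a fall edge (k, j) -> (k+1, j-1) carries weight t[j, k+j-n]; horizontal edges carry 1.
n = 2
t = {(1, 0): 2.0, (2, 0): 3.0, (2, 1): 5.0}   # sample values for t_{1,0}, t_{2,0}, t_{2,1}

edges = {}  # (x, y) -> list of ((x+1, y'), weight)
for k in range(n):
    for j in range(n + 1):
        edges.setdefault((k, j), []).append(((k + 1, j), 1.0))        # horizontal step
        if j > 0 and (j, k + j - n) in t:
            edges[(k, j)].append(((k + 1, j - 1), t[(j, k + j - n)]))  # fall step

def weight_matrix(edges, n):
    """Entry [i, j] is the sum, over all paths from source (0, i) to sink (n, j),
    of the product of the edge weights along the path."""
    W = [[0.0] * (n + 1) for _ in range(n + 1)]
    def walk(v, w, i):
        if v[0] == n:
            W[i][v[1]] += w
            return
        for u, ew in edges.get(v, []):
            walk(u, w * ew, i)
    for i in range(n + 1):
        walk((0, i), 1.0, i)
    return W

# Expected closed form from Figure 2: [[1, 0, 0], [t10, 1, 0], [t20*t10, t20 + t21, 1]]
print(weight_matrix(edges, n))  # [[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [6.0, 8.0, 1.0]]
```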
for all 0 ≤ i, j ≤ n, where the weights of the edges are defined as follows:
• For $j \neq n$, $\omega_{U_n}([(m+k, j), (m+k+1, j+1)]) = t_{j-k,\, j+1}$,
• $\omega_{U_n}([(m+k, j), (m+k+1, j)]) = 1$,
for all j, k = 0, 1, . . . , n.
Figure 3 shows an example of a $U_6^0$-path. It consists of only horizontal and rise steps.
[Figure 3. A $U_6^0$-path.]
The proofs of the following Lemma and Corollary are similar to those of Lemma 2.3 and Corollary 2.4.
Lemma 2.10. If $[(x_k, y_k)]_{k=0,\dots,n}$ is a $U_n^m$-path, then $y_k \ge y_l$ whenever $k > l$.
Corollary 2.11. For all n ∈ N, the weight matrix of a U -type subnet-
work of order n is unit upper triangular.
In our context, we define an essential planar network of order n to be the (ordered) concatenation of an L-type, a D-type and a U-type subnetwork of order n. Equivalently, a path in our network is the concatenation of an $L_n^0$-path, a $D_n^n$-path and a $U_n^{n+1}$-path, respectively. From this Definition it is clear
that our network is an acyclic directed (from left to right) planar graph. The
restrictions on the fall and rise steps given in Definitions 2.2 and 2.9 guarantee
that our slanted edges match the essential edges of the planar network defined
in [5]. This implies that our Definition of the essential planar network is
equivalent to the Definition of Fomin and Zelevinsky.
In this section, we constructed a lower triangular matrix $L_n$ from an L-type subnetwork of order n; similarly, the D-type and U-type subnetworks were used to recover the entries of a diagonal matrix $D_n$ and an upper triangular matrix $U_n$, respectively. In addition, we use the fact that the concatenation of these networks is equivalent to computing the product of their corresponding weight matrices ([4, 5]) to obtain the main Theorem of this section.
The proof of this Theorem follows directly from [5, Theorem 5], Corollaries 2.4 and 2.11, Lemma 2.7, and the fact that the weight matrix corresponding to the concatenation of planar networks is the product of the weight matrices of the original networks ([4, 5]).
Figure 4 illustrates an example of an essential planar network of order 2
obtained by concatenating three essential planar subnetworks of order 2.
[Figure 4. (a) L-type subnetwork of order 2; (b) D-type subnetwork of order 2; (c) U-type subnetwork of order 2; (d) the essential planar network of order 2 obtained by concatenating them.]
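As a small illustration of this construction (a sketch only: the explicit forms of $D_2$ and $U_2$ below are our reading of the D- and U-type subnetworks in Figure 4, obtained by mirroring the L-type case, and are not quoted from the text), the following Python/sympy snippet multiplies the three order-2 weight matrices; every entry of the product is a polynomial in the nine parameters $t_{i,j}$, in line with the $n^2$-parameter description of the Introduction.

```python
import sympy as sp

# Nine positive parameters for the order-2 essential planar network.
t = {(i, j): sp.symbols(f't{i}{j}', positive=True) for i in range(3) for j in range(3)}

# Weight matrices of the three order-2 subnetworks.  L2 is as in Figure 2;
# D2 and U2 are our reading of the D- and U-type panels (an assumption).
L2 = sp.Matrix([[1, 0, 0],
                [t[1, 0], 1, 0],
                [t[2, 0] * t[1, 0], t[2, 0] + t[2, 1], 1]])
D2 = sp.diag(t[0, 0], t[1, 1], t[2, 2])
U2 = sp.Matrix([[1, t[0, 1], t[0, 1] * t[0, 2]],
                [0, 1, t[1, 2] + t[0, 2]],
                [0, 0, 1]])

# Concatenating the subnetworks corresponds to multiplying their weight matrices,
# so every entry of A is a polynomial in the nine parameters with positive coefficients.
A = sp.expand(L2 * D2 * U2)
sp.pprint(A)
```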
Then, consider the product of the second row with the first column, which produces the following equation:
$$\bigl(L_{n+1}^{-1}[n+1, j]\bigr)_{0 \le j \le n}^{\;\intercal}\, L_n + \bigl(L_{n+1}[n+1, j]\bigr)_{0 \le j \le n} = 0_{n+1}.$$
Proof. If $i < j$, then $QI_{i,j} = \emptyset$. Thus, the sum is empty and $L_n[i, j] = 0$. If $i = j$, the sum is over the empty sequence $\varepsilon$ only. It follows that
$$L_n[i, j] = \prod_{r=j}^{i-1} t_{r+1,\, \varepsilon_{i-r}} = 1,$$
since it is an empty product. These two results agree with Corollary 2.4.
Finally, we need to show that (3.1) holds for the case $i > j$. The proof relies on constructing a bijection between $QI_{i,j}$ and $P_{i,j}^{L_n}$ such that if $\alpha \mapsto p$, then $\omega_{L_n}(p) = \prod_{r=j}^{i-1} t_{r+1,\, \alpha_{i-r}}$. Define $f(\alpha) = [(k, y_k)]_{0 \le k \le n}$ such that
$$y_k = \begin{cases} i, & \text{if } 0 \le k \le \alpha_1 + n - i, \\ i - r, & \text{if } \alpha_r + n - (i-r) \le k \le \alpha_{r+1} + n - (i-r) \text{ for } r \in \{1, \dots, i-j-1\}, \\ j, & \text{if } \alpha_{i-j} + n - j \le k \le n. \end{cases}$$
Once more, consider the set $K = \{k : k < n \text{ and } y_{k+1} = y_k - 1\}$. Let $\pi_k$ be the edge $[(k, y_k), (k+1, y_{k+1})]$ in $f(\alpha)$, for a fixed $0 \le k < n$. It is clear from Definition 2.2 that
$$\omega_{L_n}(\pi_k) = \begin{cases} t_{y_k,\, k + y_k - n}, & \text{if } k \in K, \\ 1, & \text{if } k \notin K. \end{cases}$$
Since $\omega_{L_n}(f(\alpha)) = \prod_{k=0}^{n-1} \omega_{L_n}(\pi_k)$, we deduce that
$$\omega_{L_n}(f(\alpha)) = \prod_{k \in K} t_{y_k,\, k + y_k - n}.$$
Therefore,
$$L_n[i, j] = \sum_{p \in P_{i,j}^{L_n}} \omega_{L_n}(p) = \sum_{\alpha \in QI_{i,j}} \prod_{r=j}^{i-1} t_{r+1,\, \alpha_{i-r}}.$$
Proof. If we let
$$y_k = \begin{cases} j, & \text{if } j - \alpha_1 \le k \le n, \\ j - r, & \text{if } j - r - \alpha_{r+1} \le k \le j - r - \alpha_r \text{ for } r \in \{1, \dots, j-i-1\}, \\ i, & \text{if } 0 \le k \le i - \alpha_{j-i}, \end{cases}$$
then a similar argument to the one in the proof of Theorem 3.4 is used.
for all 0 ≤ k, j ≤ n.
It follows from Corollary 2.4 that $L_n^{-1}$ is unit lower triangular. In this section, a formula for the entries of $L_n^{-1}$ is obtained and we use it to show that $L_n^{-1}$ is indeed the inverse of $L_n$.
Lemma 4.2. For all $n \in \mathbb{N}$ and $0 \le i, j \le n$, let $p \in P_{i,j}^{L_n}$ and $\pi_1, \pi_2 \in p$ be such that $\omega_{L_n}^{-1}(\pi_1) = -t_{y_1, s_1}$ and $\omega_{L_n}^{-1}(\pi_2) = -t_{y_2, s_2}$, where $\omega_{L_n}^{-1}$ is defined as in Definition 4.1. If $y_1 < y_2$, then $s_1 < s_2$.
Proof. From Definition 4.1, we conclude that there are x1 , x2 ∈
{0, . . . , n − 1} such that π1 = [(m + x1 , y1 ), (m + x1 + 1, y1 − 1)] and
π2 = [(m + x2 , y2 ), (m + x2 + 1, y2 − 1)]. It follows from the same Defini-
tion that s1 = n − x1 − 1 and s2 = n − x2 − 1. Consequently, s1 − s2 = x2 − x1 .
Given that y1 < y2 and from Lemma 2.3, x1 > x2 , which means x2 − x1 < 0.
Therefore, s1 − s2 < 0.
Proof. If we let
$$y_k = \begin{cases} i, & \text{if } 0 \le k \le n - \alpha_1 - 1, \\ i - r, & \text{if } n - \alpha_r \le k \le n - \alpha_{r+1} - 1 \text{ for } r \in \{1, \dots, i-j-1\}, \\ j, & \text{if } n - \alpha_{i-j} \le k \le n, \end{cases}$$
then a similar argument to the one in the proof of Theorem 3.4 is used.
Theorem 4.4. If $L_n$ and $L_n^{-1}$ are the weight matrices of the L-type subnetwork of order n whose weights are defined as in Definitions 2.2 and 4.1, respectively, then $L_n L_n^{-1} = I_{n+1}$, the $(n+1) \times (n+1)$ identity matrix.
Proof. From Corollary 2.4, we know that both $L_n$ and $L_n^{-1}$ are unit lower triangular, and thus their product is unit lower triangular as well. It remains to show that the dot product of row i of $L_n$ with column j of $L_n^{-1}$ is 0 whenever $i > j$. To show that, we let $A_n = L_n L_n^{-1}$ and expand the product as
$$A_n[i, j] = \sum_{k=0}^{n} L_n[i, k]\, L_n^{-1}[k, j] = \sum_{k=j}^{i} T_k,$$
where
$$T_k = (-1)^{k-j} \sum_{\alpha \in QI_{i,k}} \sum_{\beta \in QS^{D}_{k,j}} \prod_{r \in \{k, \dots, i-1\}} t_{r,\, \alpha_{i-r}} \prod_{s \in \{j, \dots, k-1\}} t_{s,\, \beta_{k-s}}.$$
Split $T_k$ into two sums according to how $\alpha_{i-k}$ compares with $\beta_1$:
(4.2a)
$$T_k^{\le} = \begin{cases} \displaystyle (-1)^{k-j} \sum_{\substack{\alpha \in QI_{i,k},\ \beta \in QS^{D}_{k,j} \\ \alpha_{i-k} \le \beta_1}} \ \prod_{r \in \{k, \dots, i-1\}} t_{r,\, \alpha_{i-r}} \prod_{s \in \{j, \dots, k-1\}} t_{s,\, \beta_{k-s}}, & \text{if } j < k < i, \\ T_k, & \text{if } k = i, \\ 0, & \text{if } k = j, \end{cases}$$
(4.2b)
$$T_k^{>} = \begin{cases} \displaystyle (-1)^{k-j} \sum_{\substack{\alpha \in QI_{i,k},\ \beta \in QS^{D}_{k,j} \\ \alpha_{i-k} > \beta_1}} \ \prod_{r \in \{k, \dots, i-1\}} t_{r,\, \alpha_{i-r}} \prod_{s \in \{j, \dots, k-1\}} t_{s,\, \beta_{k-s}}, & \text{if } j < k < i, \\ 0, & \text{if } k = i, \\ T_k, & \text{if } k = j. \end{cases}$$
Clearly, $T_k = T_k^{\le} + T_k^{>}$, where $T_k^{\le}$ and $T_k^{>}$ have no common terms. It follows from (4.2a) that $A_n[i, j] = \sum_{k=j}^{i} (T_k^{\le} + T_k^{>})$. Rearranging the terms, the sum becomes
$$A_n[i, j] = T_j^{\le} + \sum_{k=j}^{i-1} \bigl(T_k^{>} + T_{k+1}^{\le}\bigr) + T_i^{>}.$$
Each term of $T_k^{>}$ matches a term of $T_{k+1}^{\le}$, which differs from it by a factor of $-1$. Therefore, $A_n = I_{n+1}$.
Then, similarly to the case of $L_2$, we can now present an example (in Figure 5) of an L-type subnetwork of order 2 with weights defined as in Definition 4.1, and the weight matrix of that network, namely $L_2^{-1}$.
Figure 5. (a) L-type subnetwork of order 2 with weighting function $\omega_{L_n}^{-1}$; (b) the weight matrix
$$L_2^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ -t_{1,0} & 1 & 0 \\ t_{2,1}\, t_{1,0} & -(t_{2,0} + t_{2,1}) & 1 \end{pmatrix}.$$
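The order-2 case of Theorem 4.4 can be verified directly; the following Python/sympy sketch (our own verification, not part of the original proof) multiplies $L_2$ from Figure 2 by $L_2^{-1}$ from Figure 5 and confirms that the product is the identity.

```python
import sympy as sp

t10, t20, t21 = sp.symbols('t10 t20 t21', positive=True)

# L2 as in Figure 2 and its inverse as displayed in Figure 5.
L2 = sp.Matrix([[1, 0, 0],
                [t10, 1, 0],
                [t20 * t10, t20 + t21, 1]])
L2_inv = sp.Matrix([[1, 0, 0],
                    [-t10, 1, 0],
                    [t21 * t10, -(t20 + t21), 1]])

# The off-diagonal terms cancel in pairs, as in the telescoping sum of Theorem 4.4.
print((L2 * L2_inv).expand() == sp.eye(3))  # True
```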
for all 0 ≤ k, j ≤ n.
It follows from Corollary 2.11 that $U_n^{-1}$ is unit upper triangular. Consequently, we obtain the following Theorem.
Proof. If we let
$$y_k = \begin{cases} j, & \text{if } \alpha_1 + 1 \le k \le n, \\ j - r, & \text{if } \alpha_{r+1} + 1 \le k \le \alpha_r \text{ for } r \in \{1, \dots, j-i-1\}, \\ i, & \text{if } 0 \le k \le \alpha_{j-i}, \end{cases}$$
then a similar argument to the one in the proof of Theorem 3.4 is used.
Theorem 4.7. If $U_n$ and $U_n^{-1}$ are the weight matrices of the U-type subnetwork of order n whose weights are defined as in Definitions 2.9 and 4.5, respectively, then $U_n U_n^{-1} = I_{n+1}$, the $(n+1) \times (n+1)$ identity matrix.
Expanding the product $U_n U_n^{-1}$ as we did in the proof of Theorem 4.4 proves this Theorem.
[Figure: (a) U-type subnetwork of order 2; (b) D-type subnetwork of order 2; (c) L-type subnetwork of order 2; (d) the planar network of order 2 for the inverse matrix, obtained by concatenating them; the horizontal middle edges carry weights $1/t_{i,i}$ and the slanted edges carry weights $-t_{i,j}$.]
5. Numerical Examples
In this section, we illustrate the Theorems presented in the previous sections via numerical examples. In the first example, we compute the inverse of A using the combinatorial description of the inverses of L, D and U introduced in Section 4. Consider the 4 × 4 matrix A such that
$$A = \begin{pmatrix} 3 & 6 & 18 & 306 \\ 3 & 8 & 26 & 454 \\ 6 & 22 & 77 & 1384 \\ 60 & 262 & 958 & 17593 \end{pmatrix}.$$
[Figure: the essential planar network of order 3 carrying the numeric edge weights of this example, from which the entries of A are recovered.]
$$L_n = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 2 & 5 & 1 & 0 \\ 20 & 71 & 30 & 1 \end{pmatrix}, \qquad D_n = \begin{pmatrix} 3 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 5 \end{pmatrix}, \qquad U_n = \begin{pmatrix} 1 & 2 & 6 & 102 \\ 0 & 1 & 4 & 74 \\ 0 & 0 & 1 & 32 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
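As a quick consistency check (a Python/numpy sketch of ours, not part of the paper), multiplying the three factors above reproduces the matrix A of this example.

```python
import numpy as np

L = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [2, 5, 1, 0],
              [20, 71, 30, 1]])
D = np.diag([3, 2, 1, 5])
U = np.array([[1, 2, 6, 102],
              [0, 1, 4, 74],
              [0, 0, 1, 32],
              [0, 0, 0, 1]])

# The product L D U recovers A row by row:
# [3, 6, 18, 306], [3, 8, 26, 454], [6, 22, 77, 1384], [60, 262, 958, 17593]
print(L @ D @ U)
```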
[Figure: the L-type, D-type and U-type subnetworks of order 3 carrying the numeric weights that encode $L_n^{-1}$, $D_n^{-1}$ and $U_n^{-1}$ for this example.]
$$L_n^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 3 & -5 & 1 & 0 \\ -39 & 79 & -30 & 1 \end{pmatrix}, \qquad D_n^{-1} = \begin{pmatrix} \frac{1}{3} & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & \frac{1}{5} \end{pmatrix}, \qquad U_n^{-1} = \begin{pmatrix} 1 & -2 & 2 & -18 \\ 0 & 1 & -4 & 54 \\ 0 & 0 & 1 & -32 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Hence
$$A^{-1} = U_n^{-1}\, D_n^{-1}\, L_n^{-1} = \begin{pmatrix} 1 & -2 & 2 & -18 \\ 0 & 1 & -4 & 54 \\ 0 & 0 & 1 & -32 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{3} & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & \frac{1}{5} \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 3 & -5 & 1 & 0 \\ -39 & 79 & -30 & 1 \end{pmatrix}.$$
Finally, concatenating all three subnetworks yields a planar network from which we recover the entries of $A^{-1}$. For simplicity, and to conform with the structure of all planar networks described in this note, we flip the concatenated network vertically, that is, we number the sources and sinks from top to bottom, to obtain the following network.
[Figure: the concatenated planar network of order 3 for $A^{-1}$, with sources and sinks numbered from top to bottom.]
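As a final consistency check of this example (a Python/numpy sketch of ours, not part of the paper), the product $U_n^{-1} D_n^{-1} L_n^{-1}$ is indeed the inverse of A.

```python
import numpy as np

A = np.array([[3, 6, 18, 306],
              [3, 8, 26, 454],
              [6, 22, 77, 1384],
              [60, 262, 958, 17593]], dtype=float)

L_inv = np.array([[1, 0, 0, 0],
                  [-1, 1, 0, 0],
                  [3, -5, 1, 0],
                  [-39, 79, -30, 1]], dtype=float)
D_inv = np.diag([1/3, 1/2, 1, 1/5])
U_inv = np.array([[1, -2, 2, -18],
                  [0, 1, -4, 54],
                  [0, 0, 1, -32],
                  [0, 0, 0, 1]], dtype=float)

# A^{-1} is recovered as the product of the inverse factors in reverse order.
A_inv = U_inv @ D_inv @ L_inv
print(np.allclose(A @ A_inv, np.eye(4)))  # True
```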
References
[1] T. Ando, Totally positive matrices, Linear Algebra Appl. 90 (1987), 165–219.
[2] R. Cantó, P. Koev, B. Ricarte, and A. M. Urbano, LDU factorization of nonsingular
totally nonpositive matrices, SIAM J. Matrix Anal. Appl. 167 (2008), 777–782.
[3] C. W. Cryer, The LU-factorization of totally positive matrices, Linear Algebra Appl. 7 (1973), 83–92.
[4] S. M. Fallat and C. R. Johnson, Totally nonnegative matrices, Princeton University
Press, Princeton, 2011.
[5] S. Fomin and A. Zelevinsky, Total positivity: tests and parametrizations, Math. In-
telligencer 22 (2000), 23–33.
[6] F. R. Gantmacher and M. G. Krein, Sur les matrices complètement non négatives et
oscillatoires, Compositio Math. 4 (1937), 445–476.
[7] F. R. Gantmacher, The theory of matrices, Chelsea, New York, 1959.
[8] M. Gasca and C. A. Micchelli, Eds., Total positivity and its applications, Mathematics
and its Applications 359, Kluwer Academic Publishers, Dordrecht, The Netherlands,
1996.
[9] M. Gasca and J. M. Peña, On factorizations of totally positive matrices, in Total posi-
tivity and its applications, Kluwer Academic Publishers, Dordrecht, The Netherlands,
1996, 109–130.
[10] S. Karlin, Total positivity, Stanford University Press, Stanford, 1968.
M. ElGebali
Mathematics and Actuarial Science Department
The American University in Cairo
11 853 Cairo
Egypt
E-mail : m.elgebali@aucegypt.edu
N. El-Sissi
Mathematics and Actuarial Science Department
The American University in Cairo
11 853 Cairo
Egypt
E-mail : nelsissi@aucegypt.edu
Received: 12.12.2016.
Revised: 7.7.2017.