The Existence of Generalized Inverses of Fuzzy Matrices


Miroslav Ćirić and Jelena Ignjatović

University of Niš, Faculty of Sciences and Mathematics,
Višegradska 33, 18000 Niš, Serbia
miroslav.ciric@pmf.edu.rs, jelena.ignjatovic@pmf.edu.rs

Abstract. In this paper we show that every fuzzy matrix with entries in
a complete residuated lattice possesses generalized inverses of certain
types, and in particular, that it possesses the greatest generalized inverse
of each of these types. We also provide an iterative method for computing
these greatest generalized inverses, which terminates in a finite number of
steps, for example, for all fuzzy matrices with entries in a Heyting algebra.
For other types of generalized inverses we determine criteria for their
existence, given in terms of the solvability of particular systems of linear
matrix equations. When these criteria are met, we prove that there is a
greatest generalized inverse of the given type and provide a direct method
for computing it.

1 Introduction and preliminaries


It is well known that all systems composed of the Moore-Penrose equations are
solvable for matrices over the field of complex numbers. This implies the
existence of all types of generalized inverses defined by these systems, such
as the 1-inverse, outer inverse, reflexive 1-inverse, least-squares 1-inverse,
minimum-norm 1-inverse, and Moore-Penrose inverse. Although the group inverse
does not necessarily exist, the Drazin inverse always exists. However, the
situation is completely different when generalized inverses are considered in
the context of semigroups, the most general context in which they are studied.
None of these types of generalized inverses necessarily exists in a semigroup,
or in an involutive semigroup.
The aim of this paper is to show that fuzzy matrices, with entries in an
arbitrary complete residuated lattice, lie somewhere in between. It is easy to
see that fuzzy matrices always possess certain types of generalized inverses,
such as generalized inverses defined by equation (2), or those defined by some
of the equations (3), (4) and (5) given below. For example, the zero matrix is
always such a generalized inverse. However, we will show that fuzzy matrices
also have other inverses of these types, and in particular, we show that they
possess the greatest such inverses. Equation (1) behaves differently from the
others, and those types of generalized inverses whose definitions include this
equation do not necessarily exist. Here, in Section 2, we determine criteria
for the existence of these types of generalized inverses, including criteria
for the existence of all
previously listed important types of generalized inverses. In addition, we
provide methods for computing the greatest inverses of these types. The method is
iterative and does not necessarily terminate in a finite number of steps for
every fuzzy matrix, but it terminates, for example, for all fuzzy matrices with
entries in a Heyting algebra. To avoid this uncertain, and generally more
complicated and demanding, procedure, in Section 3 we discuss the problem of
representing generalized inverses as solutions to certain equations of the form
AXB = C, where A, B and C are given matrices and X is an unknown matrix. We call
them linear equations. We characterize numerous types of generalized inverses by
linear equations, and using them, we determine criteria of existence and provide
direct methods for computing the greatest inverses of these types that are
generally simpler than those presented in Section 2.
Notice that various types of generalized inverses of fuzzy matrices, mainly
those with entries in Heyting algebras, the Gödel structure and Boolean
matrices, have been studied in [3, 7, 8, 11, 13, 16–18, 23], but here we use an
original approach, different from the approaches used in the mentioned papers.
Throughout this paper, ℕ will denote the set of all natural numbers, for any
n ∈ ℕ we write [1, n] = {k ∈ ℕ | 1 ≤ k ≤ n}, and ℕ₀ = ℕ ∪ {0}.
A residuated lattice is an algebra L = (L, ∧, ∨, ⊗, →, 0, 1) such that
(L1) (L, ∧, ∨, 0, 1) is a lattice with the least element 0 and the greatest element 1,
(L2) (L, ⊗, 1) is a commutative monoid with the unit 1,
(L3) ⊗ and → satisfy the residuation property: for all x, y, z ∈ L,
x ⊗ y ≤ z ⇔ x ≤ y → z.
In addition, if (L, ∧, ∨, 0, 1) is a complete lattice, then L is a complete residuated
lattice. A (complete) residuated lattice in which the operations ⊗ and ∧ coincide is
called a (complete) Heyting algebra.
Important special types of complete residuated lattices, defined on the real
unit interval [0, 1] with x ∧ y = min(x, y) and x ∨ y = max(x, y), are the Łukasiewicz
structure (x ⊗ y = max(x + y − 1, 0), x → y = min(1 − x + y, 1)) and the Gödel
structure (x ⊗ y = min(x, y), x → y = 1 if x ≤ y and x → y = y otherwise).
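These two structures can be encoded directly. The following sketch (the function names are ours, for illustration only) defines both pairs (⊗, →) and checks the residuation property (L3) on a small grid of values:

```python
def luk_tnorm(x, y):
    # x (x) y in the Lukasiewicz structure
    return max(x + y - 1.0, 0.0)

def luk_impl(x, y):
    # x -> y in the Lukasiewicz structure
    return min(1.0 - x + y, 1.0)

def godel_tnorm(x, y):
    # x (x) y in the Godel structure
    return min(x, y)

def godel_impl(x, y):
    # x -> y in the Godel structure
    return 1.0 if x <= y else y

# residuation property (L3): x (x) y <= z  iff  x <= y -> z
for tnorm, impl in ((luk_tnorm, luk_impl), (godel_tnorm, godel_impl)):
    for x in (0.0, 0.3, 0.7, 1.0):
        for y in (0.0, 0.5, 1.0):
            for z in (0.0, 0.4, 1.0):
                assert (tnorm(x, y) <= z) == (x <= impl(y, z))
```

The same grid check works for any candidate pair of operations, which makes it a convenient sanity test when experimenting with other structures on [0, 1].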
For a complete residuated lattice L and m, n ∈ ℕ, the set of all m × n matrices
with entries in L will be denoted by L^{m×n}. Such matrices will be called fuzzy
matrices. For a fuzzy matrix A ∈ L^{m×n} and i ∈ [1, m], j ∈ [1, n], the (i, j)-entry of
A will be denoted by A(i, j). We say that two fuzzy matrices A and B are of the
same type if A, B ∈ L^{m×n}, for some m, n ∈ ℕ. Fuzzy matrices of the same type can
be ordered coordinatewise: for A, B ∈ L^{m×n}, A ≤ B if and only if A(i, j) ≤ B(i, j),
for all i ∈ [1, m], j ∈ [1, n]. Endowed with this ordering, L^{m×n} forms a complete
lattice in which the meet ⋀_{i∈I} A_i and the join ⋁_{i∈I} A_i of a family {A_i}_{i∈I} of fuzzy
matrices from L^{m×n} are defined by

(⋀_{i∈I} A_i)(j, k) = ⋀_{i∈I} A_i(j, k),   (⋁_{i∈I} A_i)(j, k) = ⋁_{i∈I} A_i(j, k),

for all j ∈ [1, m], k ∈ [1, n]. For A ∈ L^{m×n}, the set (A] = {X ∈ L^{m×n} | X ≤ A} will be
called the down-set determined by A.

The transpose of a fuzzy matrix A ∈ L^{m×n} is the fuzzy matrix Aᵀ ∈ L^{n×m} defined
by Aᵀ(j, i) = A(i, j), for all i ∈ [1, m], j ∈ [1, n].
The product of two fuzzy matrices A ∈ L^{m×n} and B ∈ L^{n×p} is the fuzzy matrix
AB ∈ L^{m×p} defined by

(AB)(i, j) = ⋁_{k=1}^{n} A(i, k) ⊗ B(k, j),

for all i ∈ [1, m], j ∈ [1, p]. It is important to point out that for arbitrary matrices
A, B ∈ L^{m×n}, S ∈ L^{k×m} and T ∈ L^{n×p} the following is true:

A ≤ B  ⇒  SA ≤ SB and AT ≤ BT.

For any n ∈ ℕ, by Iₙ we denote the identity matrix of size n. For a square matrix
A ∈ L^{n×n} and arbitrary k ∈ ℕ₀, the k-th power Aᵏ of A is defined inductively by
A⁰ = Iₙ and Aᵏ⁺¹ = AᵏA, for each k ∈ ℕ₀.
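The product and the isotonicity law above can be illustrated over the Gödel structure (⊗ = min, join = max); the matrices below are our own illustrative data:

```python
def mmul(A, B):
    # (AB)(i, j) = max over k of min(A(i, k), B(k, j))  (Godel structure)
    return [[max(min(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def leq(A, B):
    # coordinatewise ordering of fuzzy matrices: A <= B
    return all(x <= y for ra, rb in zip(A, B) for x, y in zip(ra, rb))

A = [[0.2, 0.8], [0.5, 0.1]]
B = [[0.4, 0.9], [0.6, 0.3]]
S = [[1.0, 0.5], [0.3, 0.7]]
I2 = [[1.0, 0.0], [0.0, 1.0]]       # the identity matrix I_2

assert leq(A, B)
assert leq(mmul(S, A), mmul(S, B))  # A <= B implies SA <= SB
assert leq(mmul(A, S), mmul(B, S))  # A <= B implies AT <= BT
assert mmul(I2, A) == A             # I_n is a unit for the product
```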
For fuzzy matrices A ∈ L^{m×n}, B ∈ L^{n×p} and C ∈ L^{m×p}, the right residual of
C by A, denoted by A\C, and the left residual of C by B, denoted by C/B, are
fuzzy matrices in L^{n×p} and L^{m×n}, respectively, defined by

(A\C)(j, k) = ⋀_{s=1}^{m} A(s, j) → C(s, k),   (C/B)(i, j) = ⋀_{t=1}^{p} B(j, t) → C(i, t),

for all i ∈ [1, m], j ∈ [1, n], k ∈ [1, p]. It can be easily verified that the following
residuation property

AB ≤ C  ⇔  A ≤ C/B  ⇔  B ≤ A\C,   (1)

holds for arbitrary A ∈ L^{m×n}, B ∈ L^{n×p} and C ∈ L^{m×p}.
Moreover, it is not hard to verify that (A\C)/B = A\(C/B), and this matrix
will be simply denoted by A\C/B. Also, (A\C)ᵀ = Cᵀ/Aᵀ and (C/B)ᵀ = Bᵀ\Cᵀ.
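The residuals and property (1) can be verified mechanically. The sketch below (illustrative data, over the Gödel structure) computes A\C and checks AB ≤ C ⇔ B ≤ A\C by brute force over a small grid of matrices B:

```python
import itertools

def impl(x, y):
    # Godel residuum: x -> y
    return 1.0 if x <= y else y

def mmul(A, B):
    # fuzzy matrix product over the Godel structure
    return [[max(min(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def leq(A, B):
    return all(x <= y for ra, rb in zip(A, B) for x, y in zip(ra, rb))

def rres(A, C):
    # right residual A\C: the greatest X with AX <= C
    return [[min(impl(A[s][j], C[s][k]) for s in range(len(A)))
             for k in range(len(C[0]))] for j in range(len(A[0]))]

A = [[0.3, 0.9], [0.7, 0.2]]
C = [[0.5, 0.4], [0.6, 0.3]]
R = rres(A, C)
assert leq(mmul(A, R), C)           # A(A\C) <= C always holds

# AB <= C  iff  B <= A\C, over all B with entries in {0, 0.5, 1}
for entries in itertools.product((0.0, 0.5, 1.0), repeat=4):
    B = [list(entries[:2]), list(entries[2:])]
    assert leq(mmul(A, B), C) == leq(B, R)
```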

2 Solvability of systems composed of Moore-Penrose equations

Let us consider the equations

(1) AXA = A,
(2) XAX = X,
(3) (AX)ᵀ = AX,
(4) (XA)ᵀ = XA,

where A ∈ L^{m×n} is a given fuzzy matrix and X is an unknown fuzzy matrix
taking values in L^{n×m}.

Moreover, let us consider the equation

(5) AX = XA,

where A ∈ L^{n×n} is a given square fuzzy matrix and X is an unknown fuzzy
matrix taking values in L^{n×n}. For any θ ⊆ {1, 2, 3, 4, 5}, the system consisting
of the equations (i), for i ∈ θ, is denoted by (θ), and solutions to (θ) are called
θ-inverses of A. The set of all θ-inverses of A will be denoted by A{θ}.
If the system (θ) contains the equation (5), i.e. if 5 ∈ θ, it will be understood
that the matrices A and X appearing in this system are square matrices from L^{n×n},
for some n ∈ ℕ.
Commonly, a {1}-inverse is called a 1-inverse (an abbreviation for generalized
inverse) or an inner inverse, a {2}-inverse is an outer inverse, a {1, 2}-inverse is a
reflexive 1-inverse or a Thierrin–Vagner inverse, a {1, 3}-inverse is known as a least-
squares 1-inverse, a {1, 4}-inverse is a minimum-norm 1-inverse, a {1, 2, 3, 4}-inverse
is a Moore-Penrose inverse or shortly an MP-inverse of A, and a {1, 2, 5}-inverse is
known as a group inverse of A. If A has at least one θ-inverse, we say that it
is θ-invertible. In particular, an element having the MP-inverse is MP-invertible,
and an element having a group inverse is group invertible. An element having a
{1}-inverse is often called a regular element.
If they exist, the Moore-Penrose inverse and the group inverse of a matrix A
are unique, and they are denoted respectively by A† and A#.
Note that the zero matrix (the matrix all of whose entries are equal to 0) is a
solution to equations (2), (3), (4) and (5), as well as to any system composed of
some of these equations. However, our first theorem shows that these equations
and related systems also have greatest solutions, for an arbitrary fuzzy
matrix A.

Theorem 1. For an arbitrary fuzzy matrix A the following statements are true:

(a) the matrix A has the greatest {2}-inverse;
(b) the matrix A has the greatest θ-inverse, for each ∅ ≠ θ ⊆ {3, 4, 5}.

Proof. (a) Let A ∈ L^{m×n}, for some m, n ∈ ℕ, and let φ : L^{n×m} → L^{n×m} be the map-
ping defined by φ(X) = XAX, for every X ∈ L^{n×m}. Then φ is an isotone mapping
and the set of {2}-inverses of A is equal to the set of fixed points of φ. Since
L^{n×m} is a complete lattice, by the Knaster-Tarski theorem (Theorem 12.2 [21]) we
obtain that there is the greatest fixed point of φ, i.e., the greatest {2}-inverse of A.
(b) We will prove the existence of the greatest {3, 4, 5}-inverse. All other cases
can be proved in the same way.
As already noted, A and X must be square matrices from L^{n×n}, for some
n ∈ ℕ. It is clear that (AX)ᵀ = AX if and only if AX ≤ (AX)ᵀ, which is equivalent
to X ≤ A\(AX)ᵀ. In a similar way we show that the equation (XA)ᵀ = XA is equiv-
alent to X ≤ (XA)ᵀ/A.
On the other hand, the equation (5) is equivalent to the system of inequalities
AX ≤ XA and XA ≤ AX, which are equivalent to X ≤ A\(XA) and X ≤ (AX)/A.

Therefore, the system consisting of equations (3), (4) and (5) is equivalent to
the system of inequalities

X ≤ A\(AX)ᵀ,  X ≤ (XA)ᵀ/A,  X ≤ A\(XA),  X ≤ (AX)/A,

which is equivalent to the single inequality

X ≤ A\(AX)ᵀ ∧ (XA)ᵀ/A ∧ A\(XA) ∧ (AX)/A.

Define now a mapping φ : L^{n×n} → L^{n×n} by

φ(X) = A\(AX)ᵀ ∧ (XA)ᵀ/A ∧ A\(XA) ∧ (AX)/A.

Then φ is an isotone mapping and the set of all {3, 4, 5}-inverses of A is the set
of all post-fixed points of φ, and again by the Knaster-Tarski theorem we obtain
that there exists the greatest post-fixed point of φ, i.e., there exists the greatest
{3, 4, 5}-inverse of A.

Let us note that, by the Knaster-Tarski theorem, the greatest fixed point of the
function φ defined in the proof of (a) is also the greatest post-fixed point of this
function. Consequently, the previous theorem also provides a method for com-
puting the greatest {2}-inverse or the greatest θ-inverse, for ∅ ≠ θ ⊆ {3, 4, 5}, based on
Kleene's method for computing the greatest post-fixed point of an isotone
mapping on a complete lattice. Namely, for any isotone mapping φ of L^{m×n} into
itself we define a sequence {X_k}_{k∈ℕ} of matrices inductively, as follows:

X₁ = φ(1),  X_{k+1} = φ(X_k), for each k ∈ ℕ,

where 1 is the matrix all of whose entries are 1 (the greatest matrix in L^{m×n}). If
there exists k ∈ ℕ such that X_k = X_{k+1}, then X_k = X_{k+m}, for each m ∈ ℕ, and
X_k is the greatest post-fixed point of φ. In particular, this will happen whenever
φ is defined as in the proof of (a) of Theorem 1 and L is a Heyting algebra, the
Łukasiewicz or the Gödel structure. This will also happen whenever φ is defined
as in the proof of (b) and L is the Gödel structure.
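Over the Gödel structure (a Heyting algebra) this iteration terminates. A minimal sketch for the greatest {2}-inverse, with φ(X) = XAX and our own illustrative data:

```python
def mmul(A, B):
    # fuzzy matrix product over the Godel structure
    return [[max(min(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0.4, 0.8], [0.6, 0.3]]
X = [[1.0, 1.0], [1.0, 1.0]]        # the greatest matrix 1 in L^{2x2}
while True:
    Y = mmul(mmul(X, A), X)         # X_{k+1} = phi(X_k) = X_k A X_k
    if Y == X:                      # stabilized: greatest post-fixed point
        break
    X = Y

assert mmul(mmul(X, A), X) == X     # XAX = X: X is a {2}-inverse of A
```

Termination is guaranteed here because all iterates have entries in the finite subalgebra generated by the entries of A.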
Now we consider the equation (1). For the sake of simplicity, set A° = A\A/A.

Theorem 2. A fuzzy matrix A ∈ L^{m×n} is {1}-invertible if and only if A° ∈ A{1}.
If this is the case, A° is the greatest {1}-inverse and A°AA° is the greatest {1, 2}-
inverse of A.

Proof. Clearly, if A° ∈ A{1}, then A is {1}-invertible. Conversely, if A is {1}-inver-
tible and B ∈ A{1}, then B is a solution to the inequality AXA ≤ A. According
to the residuation property, A° is the greatest solution to this inequality, whence
B ≤ A°. Now, A = ABA ≤ AA°A, and thus, A° is the greatest {1}-inverse of A.
It is easy to check that A°AA° is a {1, 2}-inverse of A, and if B is an arbitrary
{1, 2}-inverse of A, then it is a {1}-inverse of A, whence B ≤ A°, and thus,
B = BAB ≤ A°AA°. Therefore, A°AA° is the greatest {1, 2}-inverse of A.
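Theorem 2 gives a direct, non-iterative test. A sketch over the Gödel structure with illustrative data, computing A° = A\A/A and checking the criterion AA°A = A:

```python
def impl(x, y):
    # Godel residuum: x -> y
    return 1.0 if x <= y else y

def mmul(A, B):
    return [[max(min(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rres(A, C):
    # right residual A\C: the greatest X with AX <= C
    return [[min(impl(A[s][j], C[s][k]) for s in range(len(A)))
             for k in range(len(C[0]))] for j in range(len(A[0]))]

def lres(C, B):
    # left residual C/B: the greatest X with XB <= C
    return [[min(impl(B[j][t], C[i][t]) for t in range(len(C[0])))
             for j in range(len(B))] for i in range(len(C))]

A = [[1.0, 0.4], [0.4, 1.0]]
Ao = lres(rres(A, A), A)            # A\A/A = (A\A)/A
assert mmul(mmul(A, Ao), A) == A    # criterion holds: A is {1}-invertible,
                                    # and Ao is its greatest {1}-inverse
```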


For Boolean matrices, a similar characterization of {1}-inverses and {1, 2}-
inverses can be derived from a theorem concerning Boolean-valued relations,
proved by B. M. Schein in [22]. Note that for a Boolean matrix A we have that
A° = (AᵀAᶜAᵀ)ᶜ, where Aᶜ is the matrix obtained by replacing each entry of A by its
complement in the two-element Boolean algebra (replacing 1 by 0 and 0 by 1).
Theorem 3. Let ∅ ≠ θ ⊆ {3, 4, 5}, θ₁ = θ ∪ {1} and θ₁,₂ = θ ∪ {1, 2}, and let A be an
arbitrary fuzzy matrix with entries in L. Then the following statements are true:
(a) There exists the greatest θ-inverse G of A in the down-set (A°];
(b) If A ≤ AGA, then G is the greatest θ₁-inverse and GAG is the greatest θ₁,₂-inverse
of A;
(c) If A ≤ AGA does not hold, then A has neither a θ₁-inverse nor a θ₁,₂-inverse.
Proof. (a) We will prove that there exists the greatest {3, 4, 5}-inverse G in (A°].
All other cases can be proved in the same way.
Since the equation (5) is included, we assume that A ∈ L^{n×n}, for some n ∈ ℕ.
According to Theorem 1, a matrix B ∈ L^{n×n} is a solution to the system consisting
of equations (3), (4) and (5) with B ≤ A° if and only if

B ≤ A\(AB)ᵀ ∧ (BA)ᵀ/A ∧ A\(BA) ∧ (AB)/A ∧ A°.

Define now a mapping φ : L^{n×n} → L^{n×n} by

φ(X) = A\(AX)ᵀ ∧ (XA)ᵀ/A ∧ A\(XA) ∧ (AX)/A ∧ A°.

Then φ is an isotone mapping and the set of all {3, 4, 5}-inverses B of A contained
in (A°] is the set of all post-fixed points of φ, and by the Knaster-Tarski theorem
we obtain that there exists the greatest post-fixed point G of φ, and therefore, G
is the greatest {3, 4, 5}-inverse of A contained in (A°].
(b) By G ≤ A° it follows that AGA ≤ AA°A ≤ A, and if A ≤ AGA, then it is
clear that G is a θ₁-inverse of A and it is easy to check that GAG is a θ₁,₂-inverse
of A. Since every θ₁-inverse of A is also a θ-inverse of A contained in (A°] (being a
{1}-inverse, it lies below the greatest {1}-inverse A°), and G is the greatest θ-inverse
of A in (A°], we conclude that G is the greatest θ₁-inverse of A. On the other hand, if H
is an arbitrary θ₁,₂-inverse of A, then it is a θ-inverse of A contained in (A°], so H ≤ G,
and hence, H = HAH ≤ GAG, which means that GAG is the greatest θ₁,₂-inverse of A.
(c) As noted in the proof of (b), if H is an arbitrary θ₁-inverse of A,
then H ≤ G, whence A = AHA ≤ AGA. Therefore, if A ≤ AGA does not hold,
then A has neither a θ₁-inverse nor a θ₁,₂-inverse.

Corollary 1. Let A ∈ L^{m×n} and let G be the greatest {3, 4}-inverse of A in the down-set
(A°]. Then the following statements are true:
(a) If A ≤ AGA, then GAG is the Moore-Penrose inverse of A;
(b) If A ≤ AGA does not hold, then A does not have a Moore-Penrose inverse.

Corollary 2. Let A ∈ L^{n×n} and let G be the greatest {5}-inverse of A in the down-set
(A°]. Then the following statements are true:
(a) If A ≤ AGA, then GAG is the group inverse of A;
(b) If A ≤ AGA does not hold, then A does not have a group inverse.

3 Generalized inverses represented as solutions to systems of linear equations

As we have seen, the equations discussed in the previous section can be classi-
fied into several categories. The equation (1) is a special case of a general linear
equation of the form AXB = C, where A, B and C are given, and X is an unknown
matrix. This equation does not necessarily have a solution, but there is a relative-
ly simple test of solvability, based on the computation of residuals, which also
computes the greatest solution if the equation is solvable. The equations (3), (4)
and (5) have a different form. In the literature, equations of this form are
known as two-sided linear or bilinear. These equations, as well as systems com-
posed of them, always have the greatest solution, and this solution is computed
by means of the iterative procedure presented in Section 2. The equation (2) has a
different form than the equations (3), (4) and (5), but it is also solved by the same
iterative procedure, except in the case when it is combined with other equations,
when it requires the special treatment shown in the previous section. However, the
mentioned iterative procedure does not necessarily terminate in a finite number
of steps, and even when it does terminate in a finite number of steps it is still more
complicated and demanding than the procedure for solving the equation (1)
and other linear equations. Therefore, the following question naturally arises:
can other types of generalized inverses mentioned in Section 2, except the
{1}-inverse, be computed by means of linear equations? In this section we
consider this question.
The column space C(A) of a matrix A ∈ L^{m×n} is the span (set of all possible lin-
ear combinations) of its column vectors, and the row space R(A) of A is the span
of its row vectors. As is known (see [16]), for A ∈ L^{m×n} and B ∈ L^{m×p} we have that

C(A) ⊆ C(B) ⇔ A = BY, for some Y ∈ L^{p×n},   (2)
C(A) = C(B) ⇔ AX = B and BY = A, for some X ∈ L^{n×p}, Y ∈ L^{p×n}.   (3)

Similarly, for A ∈ L^{m×n} and B ∈ L^{k×n} we have that

R(A) ⊆ R(B) ⇔ A = YB, for some Y ∈ L^{m×k},   (4)
R(A) = R(B) ⇔ XA = B and YB = A, for some X ∈ L^{k×m}, Y ∈ L^{m×k}.   (5)
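Inclusion (2) reduces to a residual test: C(A) ⊆ C(B) iff the equation A = BY is solvable, iff B(B\A) = A, since B\A is the greatest solution of BY ≤ A. A sketch over the Gödel structure with illustrative data:

```python
def impl(x, y):
    # Godel residuum: x -> y
    return 1.0 if x <= y else y

def mmul(A, B):
    return [[max(min(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rres(A, C):
    # right residual A\C: the greatest X with AX <= C
    return [[min(impl(A[s][j], C[s][k]) for s in range(len(A)))
             for k in range(len(C[0]))] for j in range(len(A[0]))]

B = [[0.8], [0.5]]
A = mmul(B, [[0.7, 0.4]])   # A = BY0 by construction, so C(A) is in C(B)
Y = rres(B, A)              # the greatest Y with BY <= A
assert mmul(B, Y) == A      # equality confirms the inclusion C(A) in C(B)
```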

An important problem that arises in the study of generalized inverses of com-
plex matrices is to find a {2}-inverse of a matrix with prescribed column and
row spaces, i.e., a {2}-inverse X of a matrix A ∈ ℂ^{m×n} satisfying C(X) = C(B) or
R(X) = R(C), or both, for given matrices B ∈ ℂ^{n×p} and C ∈ ℂ^{k×m}. Here we con-
sider this problem for fuzzy matrices.
For given matrices A ∈ L^{m×n}, B ∈ L^{n×p} and C ∈ L^{k×m} we set

A{2}_{C(B),∗} = {X ∈ A{2} | C(X) = C(B)},  A{2}_{∗,R(C)} = {X ∈ A{2} | R(X) = R(C)},
A{2}_{C(B),R(C)} = {X ∈ A{2} | C(X) = C(B), R(X) = R(C)}.

First we consider {2}-inverses with the prescribed column space.



Theorem 4. The following statements for matrices A ∈ L^{m×n} and B ∈ L^{n×p} are equiv-
alent:
(i) there exists a {2}-inverse X of A such that C(X) = C(B);
(ii) there exists a solution to the equation BYAB = B, where Y is an unknown taking
values in L^{p×m};
(iii) B ≤ B(B\B/AB)AB.
If the statements (i)–(iii) are true, then

A{2}_{C(B),∗} = {BS | S ∈ L^{p×m}, BSAB = B},   (6)

and B(B\B/AB) is the greatest element of A{2}_{C(B),∗}.

Proof. (i)⇒(ii). Let X ∈ L^{n×m} be such that XAX = X and C(X) = C(B). Then X = BS
and B = XT, for some S ∈ L^{p×m} and T ∈ L^{m×p}, so B = XT = XAXT = BSAB. Thus,
S is a solution to the equation BYAB = B.
(ii)⇒(i). Let S ∈ L^{p×m} be such that BSAB = B. Set X = BS. Then C(X) ⊆ C(B) and

XAX = BSABS = BS = X.

Moreover, B = BSAB = XAB yields C(B) ⊆ C(X), and hence, C(X) = C(B).
(ii)⇒(iii). As in the proof of Theorem 2 we obtain that B\B/AB is the greatest
solution to the inequality BYAB ≤ B. Therefore, if S is a solution to the equation
BYAB = B, then it is also a solution to the inequality BYAB ≤ B, whence it
follows that S ≤ B\B/AB. Consequently,

B = BSAB ≤ B(B\B/AB)AB.

(iii)⇒(ii). As we have already said, B\B/AB is a solution to the inequality
BYAB ≤ B, i.e., B(B\B/AB)AB ≤ B, and if B ≤ B(B\B/AB)AB, this means that
B\B/AB is a solution to the equation BYAB = B, in which case it is the greatest
solution to this equation.
If the statements (i)–(iii) are true, it follows directly from the proofs of (i)⇒(ii)
and (ii)⇒(i) that A{2}_{C(B),∗} is the set of all matrices of the form BS, where S is a solu-
tion to the equation BYAB = B, and since B\B/AB is the greatest solution to this
equation, we conclude that B(B\B/AB) is the greatest element of A{2}_{C(B),∗}.
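Theorem 4 is directly computable. A sketch over the Gödel structure with illustrative data, testing criterion (iii) and building the greatest element B(B\B/AB) of A{2}_{C(B),∗}:

```python
def impl(x, y):
    return 1.0 if x <= y else y

def mmul(A, B):
    return [[max(min(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def leq(A, B):
    return all(x <= y for ra, rb in zip(A, B) for x, y in zip(ra, rb))

def rres(A, C):
    # right residual A\C
    return [[min(impl(A[s][j], C[s][k]) for s in range(len(A)))
             for k in range(len(C[0]))] for j in range(len(A[0]))]

def lres(C, B):
    # left residual C/B
    return [[min(impl(B[j][t], C[i][t]) for t in range(len(C[0])))
             for j in range(len(B))] for i in range(len(C))]

A = [[1.0, 0.2], [0.3, 1.0]]
B = [[0.8], [0.5]]
AB = mmul(A, B)
S = lres(rres(B, B), AB)            # B\B/AB, greatest solution of BYAB <= B
X = mmul(B, S)                      # candidate greatest element B(B\B/AB)
assert leq(B, mmul(X, AB))          # criterion (iii): B <= B(B\B/AB)AB
assert mmul(mmul(X, A), X) == X     # X is a {2}-inverse of A
```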

Using the previous theorem, we give the following characterization of {1, 4}-
inverses and {1, 2, 4}-inverses. The dual theorem can be stated and proved for
{1, 3}-inverses and {1, 2, 3}-inverses.

Theorem 5. The following statements for a matrix A ∈ L^{m×n} are equivalent:

(i) A is {1, 4}-invertible;
(ii) A is {1, 2, 4}-invertible;
(iii) there exists a {2}-inverse X of A such that C(X) = C(Aᵀ);
(iv) there exists a {1, 2}-inverse X of A such that C(X) = C(Aᵀ);
(v) there exists a {1}-inverse X of A such that C(X) ⊆ C(Aᵀ);
(vi) there exists a solution to the equation AᵀYAAᵀ = Aᵀ, where Y is an unknown taking
values in L^{m×m};
(vii) there exists a solution to the equation ZAAᵀ = Aᵀ, where Z is an unknown taking
values in L^{n×m};
(viii) Aᵀ ≤ Aᵀ(Aᵀ\Aᵀ/AAᵀ)AAᵀ;
(ix) Aᵀ ≤ (Aᵀ/AAᵀ)AAᵀ.
If the statements (i)–(ix) are true, then

A{1, 4} = {T ∈ L^{n×m} | TAAᵀ = Aᵀ},   (7)
A{1, 2, 4} = A{2}_{C(Aᵀ),∗} = A{1, 2}_{C(Aᵀ),∗} = {AᵀS | S ∈ L^{m×m}, AᵀSAAᵀ = Aᵀ}   (8)
           = {TAT | T ∈ L^{n×m}, TAAᵀ = Aᵀ},

Aᵀ/AAᵀ is the greatest {1, 4}-inverse of A, and Aᵀ(Aᵀ\Aᵀ/AAᵀ) = (Aᵀ/AAᵀ)A(Aᵀ/AAᵀ)
is the greatest {1, 2, 4}-inverse of A.

Proof. (i)⇒(ii). It is well known that XAX ∈ A{1, 2, 4} for every X ∈ A{1, 4}.
(ii)⇒(vi). Let X ∈ A{1, 2, 4}, i.e., XAX = X, AXA = A and (XA)ᵀ = XA. Then

Aᵀ = (AXA)ᵀ = (XA)ᵀAᵀ = XAAᵀ = XAXAAᵀ = (XA)ᵀXAAᵀ = AᵀXᵀXAAᵀ,

which means that XᵀX is a solution to the equation AᵀYAAᵀ = Aᵀ.
(vi)⇒(vii). This implication is evident.
(vii)⇒(i). It is well known that X ∈ A{1, 4} if and only if XAAᵀ = Aᵀ (cf. [12]).
(vi)⇒(iii) and (vi)⇒(viii). This is an immediate consequence of Theorem 4.
(vii)⇒(ix). This can be proved in the same way as (ii)⇒(iii) and (iii)⇒(ii) of
Theorem 4.
(iv)⇒(v). This implication is obvious.
(v)⇒(vi). Let X ∈ A{1} be such that C(X) ⊆ C(Aᵀ), i.e., let AXA = A and X = AᵀS,
for some S ∈ L^{m×m}. Then

Aᵀ = (AXA)ᵀ = (AAᵀSA)ᵀ = AᵀSᵀAAᵀ,

which means that Sᵀ is a solution to the equation AᵀYAAᵀ = Aᵀ.
(vi)⇒(iv). Let S ∈ L^{m×m} be such that AᵀSAAᵀ = Aᵀ, and set X = AᵀS. According
to the proof of (ii)⇒(i) in Theorem 4 we obtain that X is a {2}-inverse of A such
that C(X) = C(Aᵀ), and by XAAᵀ = Aᵀ it follows that X is a {1, 4}-inverse of A. This
proves that (iv) is true.
As we have already noted, X ∈ A{1, 4} if and only if XAAᵀ = Aᵀ, which means
that (7) holds. Next, according to Theorem 4, we have that

A{2}_{C(Aᵀ),∗} = {AᵀS | S ∈ L^{m×m}, AᵀSAAᵀ = Aᵀ}.

It is clear that A{1, 2}_{C(Aᵀ),∗} ⊆ A{2}_{C(Aᵀ),∗}, and by the proof of (vi)⇒(iv) we obtain

A{2}_{C(Aᵀ),∗} = {AᵀS | S ∈ L^{m×m}, AᵀSAAᵀ = Aᵀ} ⊆ A{1, 2}_{C(Aᵀ),∗},

which means that A{2}_{C(Aᵀ),∗} = A{1, 2}_{C(Aᵀ),∗}.

If X ∈ A{1, 2, 4}, then by the proof of (ii)⇒(vi) it follows that X = AᵀS, where
S = XᵀX and AᵀSAAᵀ = Aᵀ, and if S ∈ L^{m×m} is such that AᵀSAAᵀ = Aᵀ and X = AᵀS,
by the proof of (vi)⇒(iv) we obtain that X ∈ A{1, 2, 4}. Thus, we have shown that
A{1, 2, 4} = {AᵀS | S ∈ L^{m×m}, AᵀSAAᵀ = Aᵀ}.
As in the proof of Theorem 4 we obtain that Aᵀ/AAᵀ is the greatest element of
A{1, 4}, and Aᵀ(Aᵀ\Aᵀ/AAᵀ) is the greatest element of A{1, 2, 4}.
Finally, the last equality in (8) follows directly from the equality (7) and the fact
that A{1, 2, 4} = {XAX | X ∈ A{1, 4}}, which also implies that (Aᵀ/AAᵀ)A(Aᵀ/AAᵀ)
is the greatest element of A{1, 2, 4}.

Next we discuss {2}-inverses for which both the column space and the row
space are prescribed.

Theorem 6. The following statements for matrices A ∈ L^{m×n}, B ∈ L^{n×p} and C ∈ L^{k×m}
are equivalent:
(i) there exists a {2}-inverse X of A such that C(X) = C(B) and R(X) = R(C);
(ii) there exist solutions to the equations CABY = C and ZCAB = B, where Y and Z
are unknowns taking values in L^{p×m} and L^{n×k};
(iii) C ≤ CAB(CAB\C) and B ≤ (B/CAB)CAB.
If the statements (i)–(iii) are true, then there exists a unique X ∈ A{2}_{C(B),R(C)}, which
can be represented by

X = B(CAB\C) = (B/CAB)C.   (9)

Proof. (i)⇔(ii). This equivalence was proved in a more general context in [10] and
[5]. Here we give a sketch of the proof because of certain details that will be used
in the sequel.
If X ∈ A{2} is such that C(X) = C(B) and R(X) = R(C), that is, if XAX = X,
X = BS, B = XU, X = TC and C = VX, for some S ∈ L^{p×m}, U ∈ L^{m×p}, T ∈ L^{n×k}
and V ∈ L^{k×n}, then

C = VX = VXAX = CAX = CABS and B = XU = XAXU = XAB = TCAB,

whence it follows that the equations CABY = C and ZCAB = B have solutions.
Conversely, if S ∈ L^{p×m} and T ∈ L^{n×k} are such that CABS = C and TCAB = B,
then BS = TCABS = TC, and if we set X = BS = TC, then X ∈ A{2}, C(X) = C(B)
and R(X) = R(C).
(ii)⇔(iii). As in Theorems 2 and 4 we show that the equations CABY = C and
ZCAB = B are solvable if and only if C ≤ CAB(CAB\C) and B ≤ (B/CAB)CAB,
and in this case CAB\C and B/CAB are their greatest solutions.
According to the proof of (i)⇔(ii), there is a unique X ∈ A{2}_{C(B),R(C)}, which
is represented by X = BS = TC, for arbitrary S ∈ L^{p×m} and T ∈ L^{n×k} such that
CABS = C and TCAB = B, and in particular, X = B(CAB\C) = (B/CAB)C.

Note that the inverses considered in the previous theorem are a special case of
the so-called (B, C)-inverses, which have been studied in a more general context
in [10] and [5].
Using Theorem 6 we give the following characterization of group invertibil-
ity and group inverses.

Theorem 7. The following statements for a matrix A ∈ L^{n×n} are equivalent:

(i) A is group invertible;
(ii) there exists a {2}-inverse X of A such that C(X) = C(A) and R(X) = R(A);
(iii) there exist solutions to the equations A³Y = A and ZA³ = A, where Y and Z are
unknowns taking values in L^{n×n};
(iv) A ≤ A³(A³\A) and A ≤ (A/A³)A³.
If the statements (i)–(iv) are true, then the group inverse of A can be represented by

A# = A(A³\A) = (A/A³)A.   (10)

Proof. This follows immediately from Theorem 6.
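Theorem 7 in code: a sketch with an illustrative idempotent fuzzy matrix over the Gödel structure, where both solvability criteria hold and the candidate A# = A(A³\A) satisfies the group-inverse axioms:

```python
def impl(x, y):
    return 1.0 if x <= y else y

def mmul(A, B):
    return [[max(min(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def leq(A, B):
    return all(x <= y for ra, rb in zip(A, B) for x, y in zip(ra, rb))

def rres(A, C):
    # right residual A\C
    return [[min(impl(A[s][j], C[s][k]) for s in range(len(A)))
             for k in range(len(C[0]))] for j in range(len(A[0]))]

A = [[1.0, 0.3], [0.3, 1.0]]
A3 = mmul(mmul(A, A), A)              # A^3
G = mmul(A, rres(A3, A))              # candidate A# = A(A^3\A)
assert leq(A, mmul(A3, rres(A3, A)))  # A <= A^3(A^3\A): A^3 Y = A solvable
assert mmul(mmul(A, G), A) == A       # (1) AGA = A
assert mmul(mmul(G, A), G) == G       # (2) GAG = G
assert mmul(A, G) == mmul(G, A)       # (5) AG = GA
```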

Next we discuss a combination of {1, 4}-invertibility and group invertibility.

Theorem 8. The following statements for a matrix A ∈ L^{n×n} are equivalent:
(i) A is {1, 4}-invertible and group invertible;
(ii) there exists a {2}-inverse X of A such that C(X) = C(Aᵀ) and R(X) = R(A);
(iii) there exist solutions to the equations A²AᵀY = A and ZA²Aᵀ = Aᵀ, where Y and Z
are unknowns taking values in L^{n×n};
(iv) A ≤ A²Aᵀ(A²Aᵀ\A) and Aᵀ ≤ (Aᵀ/A²Aᵀ)A²Aᵀ.
If the statements (i)–(iv) hold, then the unique X ∈ A{2}_{C(Aᵀ),R(A)} can be represented by

X = Aᵀ(A²Aᵀ\A) = (Aᵀ/A²Aᵀ)A.   (11)
Proof. The equivalence of (ii), (iii) and (iv), the uniqueness of X ∈ A{2}_{C(Aᵀ),R(A)},
and its representation (11) follow directly from Theorem 6.
The equivalence of (i) and (ii) was proved in [24] (see also [19, 4]), but here
we give a different proof.
(i)⇒(iii). Let S ∈ A{1, 4} and T = A#. Then

A = A²T = A(ASA)T = A²(SA)T = A²(SA)ᵀT = A²AᵀSᵀT,
Aᵀ = (ASA)ᵀ = (SA)ᵀAᵀ = SAAᵀ = STA²Aᵀ,

and hence, the equations A²AᵀY = A and ZA²Aᵀ = Aᵀ have solutions.
(ii)⇒(i). Let (ii) hold, i.e., let X ∈ L^{n×n} be such that XAX = X, X = AᵀS, Aᵀ = XU,
X = TA and A = VX, for some S, U, T, V ∈ L^{n×n}.
First, we have that Aᵀ = XU = XAXU = XAAᵀ, whence X ∈ A{1, 4}. Next,

X = TA = TAXA = X²A,  A = VX = VXAX = A²X,

and if we set Y = AX², then

AYA = A(AX²)A = (A²X)(XA) = AXA = A,
YAY = (AX²)A(AX²) = A(X²A)(AX²) = (AXA)X² = AX² = Y,
AY = A(AX²) = (A²X)X = AX = AX²A = YA,

which means that Y = A#.




The inverses considered in Theorem 8 are known in the literature as dual core
inverses, whereas their duals, obtained by combining {1, 3}-invertibility and group
invertibility, are known as core inverses. Core and dual core inverses have been
studied in [1], in the context of complex matrices, and in [4, 5, 19, 24, 25], in the
context of involutive semigroups and rings.
Finally, we discuss Moore-Penrose invertibility.

Theorem 9. The following statements for a matrix A ∈ L^{m×n} are equivalent:
(i) A is MP-invertible;
(ii) A is {1, 3}-invertible and {1, 4}-invertible;
(iii) there exists a {2}-inverse X of A such that C(X) = C(Aᵀ) and R(X) = R(Aᵀ);
(iv) there exists a solution to the equation AᵀAAᵀY = Aᵀ, where Y is an unknown taking
values in L^{m×m};
(v) there exists a solution to the equation ZAᵀAAᵀ = Aᵀ, where Z is an unknown taking
values in L^{n×n};
(vi) Aᵀ ≤ AᵀAAᵀ(AᵀAAᵀ\Aᵀ);
(vii) Aᵀ ≤ (Aᵀ/AᵀAAᵀ)AᵀAAᵀ;
(viii) Aᵀ ≤ AᵀA(AᵀA\Aᵀ) ∧ (Aᵀ/AAᵀ)AAᵀ.
If the statements (i)–(viii) are true, then

A† = Aᵀ(AᵀAAᵀ\Aᵀ) = (Aᵀ/AᵀAAᵀ)Aᵀ = (Aᵀ/AAᵀ)A(AᵀA\Aᵀ).   (12)

Proof. The equivalence of (i) and (ii) is a well-known result. We only note that if
X ∈ A{1, 3} and Y ∈ A{1, 4}, then A† = YAX, and by Theorem 5 and its dual it fol-
lows that A† = (Aᵀ/AAᵀ)A(AᵀA\Aᵀ).
Next, (i)⇒(iii) is also a well-known result (it can be easily verified), (iii)⇒(ii)
and (ii)⇒(viii) are immediate consequences of Theorem 5 and its dual, (iii)⇒(iv)
and (iii)⇒(v) follow directly from Theorem 6, and (iv)⇒(vi) and (v)⇒(vii) can be
proved in the same way as the corresponding parts of the previous theorems.
Therefore, it remains to prove (iv)⇒(i), since (v)⇒(i) can be proved in a simi-
lar way. Let us note that (iv)⇒(i) was first proved by Crvenković in [6] (his proof
can also be found in [9] and [15]), and it was rediscovered independently in [25].
Here we give a simpler proof. Consider S ∈ L^{m×m} such that AᵀAAᵀS = Aᵀ, and
set X = AᵀS. Then AᵀAX = Aᵀ, whence X ∈ A{1, 3}. Moreover, we have that

X = AᵀS = (AXA)ᵀS = AᵀXᵀAᵀS = AᵀXᵀX,
(XA)ᵀ = (AᵀXᵀXA)ᵀ = AᵀXᵀXA = XA,
XAX = (XA)ᵀX = AᵀXᵀX = X,

so X ∈ A{2, 4}. Therefore, X = A†.
The representation A† = Aᵀ(AᵀAAᵀ\Aᵀ) = (Aᵀ/AᵀAAᵀ)Aᵀ is obtained directly
from Theorem 6.
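Representation (12) in code: a sketch over the Gödel structure computing the candidate A† = Aᵀ(AᵀAAᵀ\Aᵀ) for an illustrative matrix and checking all four Moore-Penrose equations:

```python
def impl(x, y):
    return 1.0 if x <= y else y

def mmul(A, B):
    return [[max(min(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rres(A, C):
    # right residual A\C
    return [[min(impl(A[s][j], C[s][k]) for s in range(len(A)))
             for k in range(len(C[0]))] for j in range(len(A[0]))]

def transpose(A):
    return [list(r) for r in zip(*A)]

A = [[0.8], [0.5]]                  # a fuzzy matrix in L^{2x1}
At = transpose(A)
N = mmul(mmul(At, A), At)           # A^T A A^T
X = mmul(At, rres(N, At))           # candidate A+ = A^T(A^T A A^T \ A^T)
assert mmul(mmul(A, X), A) == A                # (1) AXA = A
assert mmul(mmul(X, A), X) == X                # (2) XAX = X
assert transpose(mmul(A, X)) == mmul(A, X)     # (3) (AX)^T = AX
assert transpose(mmul(X, A)) == mmul(X, A)     # (4) (XA)^T = XA
```

Here X coincides with Aᵀ, as Theorem 10 below predicts for Heyting algebras such as the Gödel structure.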

As shown in [20], if a Boolean matrix A has the Moore-Penrose inverse A†,
then A† = Aᵀ. The same result has also been proved for matrices with entries in an
arbitrary Boolean algebra [18], matrices with entries in the Gödel structure [16]
and in a Brouwerian lattice [8]. The next theorem generalizes all these results.

Theorem 10. Let L be a Heyting algebra and A ∈ L^{m×n}. Then A is MP-invertible if
and only if AᵀAAᵀ ≤ Aᵀ. In this case, A† = Aᵀ.

Proof. Let A be MP-invertible and set X = A†. It can be easily verified that
AᵀA is group invertible with the group inverse XXᵀ, and that AᵀA belongs to a
subgroup G of the semigroup of n × n matrices over L whose identity is E = XA.
Due to the idempotency of the meet operation in the Heyting algebra L we
have that Aᵀ ≤ AᵀAAᵀ, and according to Theorem 9,

E = XA = Aᵀ(AᵀAAᵀ\Aᵀ)A ≤ AᵀAAᵀ(AᵀAAᵀ\Aᵀ)A ≤ AᵀA.

Further, from E ≤ AᵀA it follows that

E ≤ AᵀA ≤ (AᵀA)² ≤ ⋯ ≤ (AᵀA)ᵏ ≤ (AᵀA)ᵏ⁺¹ ≤ ⋯,

for every k ∈ ℕ. As is known, any Heyting algebra is locally finite, i.e., every
finitely generated subalgebra of it is finite. Therefore, the subalgebra of L generated
by all entries of the matrix AᵀA is finite. Since all entries of the matrices from the
sequence {(AᵀA)ᵏ}_{k∈ℕ} are contained in this finite subalgebra, we conclude that
this sequence is finite, and the standard semigroup-theoretic argument says
that there exists k ∈ ℕ such that (AᵀA)ᵏ is an idempotent (cf. Theorem 1.2.2 [14]
or Theorem 1.8 [2]). However, all powers of AᵀA belong to the subgroup G whose
only idempotent is its identity E, so we have that E ≤ AᵀA ≤ (AᵀA)ᵏ = E, that is,
E = AᵀA. From this it follows that

A† = A†AA† = AᵀAA† = (AA†A)ᵀ = Aᵀ,

which clearly implies AᵀAAᵀ ≤ Aᵀ.
Conversely, if AᵀAAᵀ ≤ Aᵀ, then AᵀAAᵀ = Aᵀ (since Aᵀ ≤ AᵀAAᵀ always holds
in a Heyting algebra), which implies AAᵀA = A. In addition, (AᵀA)ᵀ = AᵀA and
(AAᵀ)ᵀ = AAᵀ, and we conclude that A† = Aᵀ.

Let us note that the same proof is valid if L is an arbitrary distributive lattice.
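Theorem 10's criterion is a one-line test over the Gödel structure (a Heyting algebra). The sketch below uses illustrative data, with one matrix failing and one satisfying AᵀAAᵀ ≤ Aᵀ:

```python
def mmul(A, B):
    # fuzzy matrix product over the Godel structure
    return [[max(min(A[i][k], B[k][j]) for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def leq(A, B):
    return all(x <= y for ra, rb in zip(A, B) for x, y in zip(ra, rb))

def transpose(A):
    return [list(r) for r in zip(*A)]

A = [[0.5, 1.0], [1.0, 0.3]]
At = transpose(A)
assert not leq(mmul(mmul(At, A), At), At)   # criterion fails: A has no A+

B = [[1.0, 0.2], [0.2, 1.0]]
Bt = transpose(B)
assert leq(mmul(mmul(Bt, B), Bt), Bt)       # criterion holds, so B+ = B^T
assert mmul(mmul(B, Bt), B) == B            # B B^T B = B, as MP requires
```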

References

1. Baksalary, O.M., Trenkler, G.: Core inverse of matrices, Linear and Multilinear Algebra 58, 681–697 (2010)
2. Bogdanović, S., Ćirić, M., Popović, Ž.: Semilattice Decompositions of Semigroups, University of Niš, Faculty of Economics, 2011.
3. Cen, J.-M.: Fuzzy matrix partial orderings and generalized inverses, Fuzzy Sets and Systems 105, 453–458 (1999)
4. Chen, J.L., Patrício, P., Zhang, Y.L., Zhu, H.H.: Characterizations and representations of core and dual core inverses, Canadian Mathematical Bulletin, http://dx.doi.org/10.4153/CMB-2016-045-7
5. Ćirić, M., Stanimirović, P., Ignjatović, J.: Outer and inner inverses in semigroups belonging to the prescribed Green's equivalence classes, to appear.
6. Crvenković, S.: On *-regular semigroups, Proceedings of the Third Algebraic Conference, Beograd, pp. 51–57, 1982
7. Cui-Kui, Z.: On matrix equations in a class of complete and completely distributive lattices, Fuzzy Sets and Systems 22, 303–320 (1987)
8. Cui-Kui, Z.: Inverses of L-fuzzy matrices, Fuzzy Sets and Systems 34, 103–116 (1990)
9. Dolinka, I.: A characterization of groups in the class of *-regular semigroups, Novi Sad Journal of Mathematics 29, 215–219 (1999)
10. Drazin, M.P.: A class of outer generalized inverses, Linear Algebra and its Applications 436, 1909–1923 (2012)
11. Han, S.-C., Li, H.-X., Wang, J.-Y.: Resolution of matrix equations over arbitrary Brouwerian lattices, Fuzzy Sets and Systems 159, 40–46 (2008)
12. Hartwig, R.: Block generalized inverses, Archive for Rational Mechanics and Analysis 61, 197–251 (1976)
13. Hashimoto, H.: Subinverses of fuzzy matrices, Fuzzy Sets and Systems 12, 155–168 (1984)
14. Howie, J.M.: Fundamentals of Semigroup Theory, Clarendon Press, Oxford, 1995.
15. Ignjatović, J., Ćirić, M.: Moore-Penrose equations in involutive residuated semigroups and involutive quantales, Filomat, to appear.
16. Kim, K.H., Roush, F.W.: Generalized fuzzy matrices, Fuzzy Sets and Systems 4, 293–315 (1980)
17. Pradhan, R., Pal, M.: Some results on generalized inverse of intuitionistic fuzzy matrices, Fuzzy Information and Engineering 6, 133–145 (2014)
18. Prasada Rao, P.S.S.N.V., Bhaskara Rao, K.P.S.: On generalized inverses of Boolean matrices, Linear Algebra and its Applications 11, 135–153 (1975)
19. Rakić, D.S., Dinčić, N.Č., Djordjević, D.S.: Group, Moore-Penrose, core and dual core inverse in rings with involution, Linear Algebra and its Applications 463, 115–133 (2014)
20. Rao, C.R.: On generalized inverses of Boolean valued matrices, presented at the Conference on Combinatorial Mathematics, Delhi, 1972.
21. Roman, S.: Lattices and Ordered Sets, Springer, New York, 2008.
22. Schein, B.: Regular elements of the semigroup of all binary relations, Semigroup Forum 13, 95–102 (1976)
23. Wang, Z.-D.: T-type regular L-relations on a complete Brouwerian lattice, Fuzzy Sets and Systems 145, 313–322 (2004)
24. Xu, S.Z., Chen, J.L., Zhang, X.X.: New characterizations for core inverses in rings with involution, arXiv:1512.08073v1.
25. Zhu, H., Chen, J., Patrício, P.: Further results on the inverse along an element in semigroups and rings, Linear and Multilinear Algebra 64, 393–403 (2016)
