The Existence of Generalized Inverses of Fuzzy Matrices
Abstract. In this paper we show that every fuzzy matrix with entries in
a complete residuated lattice possesses generalized inverses of certain
types, and in particular, that it possesses the greatest generalized inverses of these
types. We also provide an iterative method for computing these greatest
generalized inverses, which terminates in a finite number of steps, for
example, for all fuzzy matrices with entries in a Heyting algebra. For other
types of generalized inverses we determine criteria for their existence, given
in terms of the solvability of particular systems of linear matrix equations.
When these criteria are met, we prove that there is the greatest generalized
inverse of the given type and we provide a direct method for its computation.
for all $i \in [1, m]$, $j \in [1, p]$. It is important to point out that for arbitrary matrices
$A, B \in L^{m\times n}$, $S \in L^{k\times m}$ and $T \in L^{n\times p}$ the following is true:
$$A \leqslant B \ \Rightarrow \ SA \leqslant SB \ \text{ and } \ AT \leqslant BT.$$
For any $n \in \mathbb{N}$, by $I_n$ we denote the identity matrix of size $n$. For a square matrix
$A \in L^{n\times n}$ and arbitrary $k \in \mathbb{N}_0$, the $k$-th power $A^k$ of $A$ is defined inductively, by
$A^0 = I_n$ and $A^{k+1} = A^k A$, for each $k \in \mathbb{N}_0$.
For fuzzy matrices $A \in L^{m\times n}$, $B \in L^{n\times p}$ and $C \in L^{m\times p}$, the right residual of
$C$ by $A$, denoted by $A\backslash C$, and the left residual of $C$ by $B$, denoted by $C/B$, are
fuzzy matrices in $L^{n\times p}$ and $L^{m\times n}$, respectively, defined by
$$(A\backslash C)(j, k) = \bigwedge_{s=1}^{m} A(s, j) \rightarrow C(s, k), \qquad (C/B)(i, j) = \bigwedge_{t=1}^{p} B(j, t) \rightarrow C(i, t),$$
for all $i \in [1, m]$, $j \in [1, n]$, $k \in [1, p]$. It can be easily verified that the following
residuation property holds for all fuzzy matrices of compatible sizes:
$$AX \leqslant C \ \Leftrightarrow \ X \leqslant A\backslash C, \qquad XB \leqslant C \ \Leftrightarrow \ X \leqslant C/B.$$
For a fuzzy matrix $A \in L^{m\times n}$ we consider the following equations, where $X \in L^{n\times m}$ is an
unknown fuzzy matrix and $(\cdot)^{\top}$ denotes the transpose:
(1) $AXA = A$,
(2) $XAX = X$,
(3) $(AX)^{\top} = AX$,
(4) $(XA)^{\top} = XA$,
(5) $AX = XA$.
For a non-empty $\theta \subseteq \{1, 2, 3, 4, 5\}$, a fuzzy matrix $X$ satisfying the equations $(i)$ for all $i \in \theta$
is called a $\theta$-inverse of $A$, and the set of all $\theta$-inverses of $A$ is denoted by $A\theta$. A
$\{1, 2, 3, 4\}$-inverse, if it exists, is unique; it is the Moore-Penrose inverse of $A$, denoted by
$A^{\dagger}$, and $A$ is then said to be MP-invertible. Note that the equation (5) makes sense only
when $A$ and $X$ are square matrices of the same size.
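For a concrete illustration of the residuals $A\backslash C$ and $C/B$ defined above, the following sketch (ours, not part of the original text) computes them over the Gödel structure on the real unit interval, where $x \otimes y = \min(x, y)$ and $x \rightarrow y = 1$ if $x \leqslant y$ and $x \rightarrow y = y$ otherwise; the function names are purely illustrative.

    # Right and left residuals of fuzzy matrices over the Godel structure on [0, 1].
    # Matrices are represented as lists of rows; names are illustrative only.

    def godel_impl(x, y):
        # Residuum of the Godel structure: x -> y = 1 if x <= y, and y otherwise.
        return 1.0 if x <= y else y

    def right_residual(A, C):
        # (A\C)(j, k) = min over s of A(s, j) -> C(s, k),
        # for A in L^{m x n} and C in L^{m x p}; the result lies in L^{n x p}.
        m, n, p = len(A), len(A[0]), len(C[0])
        return [[min(godel_impl(A[s][j], C[s][k]) for s in range(m))
                 for k in range(p)] for j in range(n)]

    def left_residual(C, B):
        # (C/B)(i, j) = min over t of B(j, t) -> C(i, t),
        # for C in L^{m x p} and B in L^{n x p}; the result lies in L^{m x n}.
        m, p, n = len(C), len(C[0]), len(B)
        return [[min(godel_impl(B[j][t], C[i][t]) for t in range(p))
                 for j in range(n)] for i in range(m)]

With these helpers, the residuation property can be checked numerically on small examples: $A\backslash C$ is the greatest $X$ with $AX \leqslant C$ under the max-min product.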
Theorem 1. For an arbitrary fuzzy matrix $A$ the following statements are true:
(a) there exists the greatest $\{2\}$-inverse of $A$;
(b) if $A$ is a square matrix, then for every non-empty $\theta \subseteq \{3, 4, 5\}$ there exists the greatest $\theta$-inverse of $A$.
Proof. (a) Let $A \in L^{m\times n}$, for some $m, n \in \mathbb{N}$, and let $\phi : L^{n\times m} \to L^{n\times m}$ be the mapping
defined by $\phi(X) = XAX$, for every $X \in L^{n\times m}$. Then $\phi$ is an isotone mapping
and the set of $\{2\}$-inverses of $A$ is equal to the set of fixed points of $\phi$. Since
$L^{n\times m}$ is a complete lattice, by the Knaster-Tarski theorem (Theorem 12.2 of [21]) we
obtain that there is the greatest fixed point of $\phi$, i.e., the greatest $\{2\}$-inverse of $A$.
(b) We will prove the existence of the greatest {3, 4, 5}-inverse. All other cases
can be proved in the same way.
As already noted, $A$ and $X$ must be square matrices from $L^{n\times n}$, for some
$n \in \mathbb{N}$. It is clear that $(AX)^{\top} = AX$ if and only if $AX \leqslant (AX)^{\top}$, which is equivalent
to $X \leqslant A\backslash (AX)^{\top}$. In a similar way we show that the equation $(XA)^{\top} = XA$ is equivalent
to $X \leqslant (XA)^{\top}/A$.
On the other hand, the equation (5) is equivalent to the system of inequalities
$AX \leqslant XA$ and $XA \leqslant AX$, which are equivalent to $X \leqslant A\backslash (XA)$ and $X \leqslant (AX)/A$.
Therefore, the system consisting of the equations (3), (4) and (5) is equivalent to
the system of inequalities
$$X \leqslant A\backslash (AX)^{\top}, \quad X \leqslant (XA)^{\top}/A, \quad X \leqslant A\backslash (XA), \quad X \leqslant (AX)/A,$$
i.e., to the single inequality $X \leqslant \phi(X)$, where $\phi : L^{n\times n} \to L^{n\times n}$ is the mapping defined by
$$\phi(X) = A\backslash (AX)^{\top} \wedge (XA)^{\top}/A \wedge A\backslash (XA) \wedge (AX)/A, \quad \text{for every } X \in L^{n\times n}.$$
Then $\phi$ is an isotone mapping and the set of all $\{3, 4, 5\}$-inverses of $A$ is the set
of all post-fixed points of $\phi$, and again by the Knaster-Tarski theorem we obtain
that there exists the greatest post-fixed point of $\phi$, i.e., there exists the greatest
$\{3, 4, 5\}$-inverse of $A$.
Let us note that, by the Knaster-Tarski theorem, the greatest fixed point of the
function $\phi$ defined in the proof of (a) is also the greatest post-fixed point of this
function. Consequently, the previous theorem also provides a method for com-
puting the greatest $\{2\}$-inverse or the greatest $\theta$-inverse, for $\theta \subseteq \{3, 4, 5\}$, based on
Kleene's method for computing the greatest post-fixed point of an isotone
mapping on a complete lattice. Namely, for any isotone mapping $\phi$ of $L^{m\times n}$ into
itself we define a sequence $\{X_k\}_{k\in\mathbb{N}}$ of matrices inductively, as follows:
$$X_1 = \mathbf{1}, \qquad X_{k+1} = X_k \wedge \phi(X_k), \ \text{ for each } k \in \mathbb{N},$$
where $\mathbf{1}$ is the matrix whose entries are all equal to 1 (the greatest matrix in $L^{m\times n}$). If
there exists $k \in \mathbb{N}$ such that $X_k = X_{k+1}$, then $X_k = X_{k+m}$, for each $m \in \mathbb{N}$, and
$X_k$ is the greatest post-fixed point of $\phi$. In particular, this will happen whenever
$\phi$ is defined as in the proof of (a) of Theorem 1 and $L$ is a Heyting algebra,
the Łukasiewicz or the Gödel structure. This will also happen whenever $\phi$ is defined
as in the proof of (b) and $L$ is the Gödel structure.
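As an illustration of this procedure (ours, with illustrative names, and restricted to the Gödel structure, where the matrix product is the max-min composition), the following sketch computes the greatest $\{2\}$-inverse by iterating the mapping $\phi(X) = XAX$ from the proof of (a); over the Gödel structure the loop necessarily terminates, since all entries of the iterates belong to the finite set consisting of the entries of $A$ and $1$.

    # Kleene-type iteration X_1 = 1, X_{k+1} = X_k /\ phi(X_k) for phi(X) = XAX
    # over the Godel structure (max-min matrix product). Illustrative sketch only.

    def maxmin_product(P, Q):
        # (PQ)(i, k) = max over j of min(P(i, j), Q(j, k)).
        return [[max(min(P[i][j], Q[j][k]) for j in range(len(Q)))
                 for k in range(len(Q[0]))] for i in range(len(P))]

    def entrywise_meet(P, Q):
        return [[min(p, q) for p, q in zip(rp, rq)] for rp, rq in zip(P, Q)]

    def greatest_2_inverse(A):
        # Start from the greatest matrix in L^{n x m}; the sequence is decreasing
        # and stabilizes at the greatest post-fixed (hence fixed) point of phi,
        # i.e. at the greatest {2}-inverse of A.
        m, n = len(A), len(A[0])
        X = [[1.0] * m for _ in range(n)]
        while True:
            Y = entrywise_meet(X, maxmin_product(maxmin_product(X, A), X))
            if Y == X:
                return X
            X = Y

The same loop, with $\phi$ replaced by the mapping from the proof of (b), computes the greatest $\{3, 4, 5\}$-inverse whenever the iteration stabilizes.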
Now we consider the equation (1). For the sake of simplicity, set A = A\A/A.
Theorem 4. The following statements for matrices $A \in L^{m\times n}$ and $B \in L^{n\times p}$ are equiv-
alent:
(i) there exists a $\{2\}$-inverse $X$ of $A$ such that $C(X) = C(B)$;
(ii) there exists a solution to the equation $BYAB = B$, where $Y$ is an unknown taking
values in $L^{p\times m}$;
(iii) $B \leqslant B(B\backslash B/AB)AB$.
If the statements (i)–(iii) are true, then $B(B\backslash B/AB)$ is the greatest $\{2\}$-inverse $X$ of $A$ such that $C(X) = C(B)$.
Proof. (i)$\Rightarrow$(ii). Let $X \in L^{n\times m}$ be such that $XAX = X$ and $C(X) = C(B)$. Then $X = BS$
and $B = XT$, for some $S \in L^{p\times m}$ and $T \in L^{m\times p}$, so $B = XT = XAXT = BSAB$. Thus,
$S$ is a solution to the equation $BYAB = B$.
(ii)$\Rightarrow$(i). Let $S \in L^{p\times m}$ be such that $BSAB = B$. Set $X = BS$. Then $C(X) \subseteq C(B)$ and
$$XAX = BSABS = BS = X.$$
Moreover, $B = BSAB = XAB$ yields $C(B) \subseteq C(X)$, and hence, $C(X) = C(B)$.
(ii)$\Rightarrow$(iii). As in the proof of Theorem 2 we obtain that $B\backslash B/AB$ is the greatest
solution to the inequality $BYAB \leqslant B$. Therefore, if $S$ is a solution to the equation
$BYAB = B$, then it is also a solution to the inequality $BYAB \leqslant B$, whence it
follows that $S \leqslant B\backslash B/AB$. Consequently,
$$B = BSAB \leqslant B(B\backslash B/AB)AB.$$
(iii)$\Rightarrow$(ii). Since $B(B\backslash B/AB)AB \leqslant B$ always holds, (iii) gives $B(B\backslash B/AB)AB = B$, so
$B\backslash B/AB$ is a solution to the equation $BYAB = B$. Finally, if (i)–(iii) hold, then, as in the
proof of (ii)$\Rightarrow$(i), the matrix $B(B\backslash B/AB)$ is a $\{2\}$-inverse of $A$ with $C(B(B\backslash B/AB)) = C(B)$,
and it is the greatest one, since every such $\{2\}$-inverse is of the form $BS$ for some solution $S$
of $BYAB = B$, and $S \leqslant B\backslash B/AB$.
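To make the criterion of Theorem 4 concrete, the following sketch (ours; it reuses maxmin_product, right_residual and left_residual from the sketches above, and the names remain illustrative) checks condition (iii) over the Gödel structure and, when it holds, returns the matrix $B(B\backslash B/AB)$.

    # Check criterion (iii) of Theorem 4 over the Godel structure and, if it
    # holds, return B(B\B/AB), a {2}-inverse X of A with C(X) = C(B).
    # Reuses maxmin_product, right_residual and left_residual from above.

    def entrywise_leq(P, Q):
        return all(p <= q for rp, rq in zip(P, Q) for p, q in zip(rp, rq))

    def greatest_2_inverse_with_column_space(A, B):
        AB = maxmin_product(A, B)
        S = left_residual(right_residual(B, B), AB)    # S = B\B/AB, in L^{p x m}
        X = maxmin_product(B, S)                       # candidate X = BS
        if entrywise_leq(B, maxmin_product(X, AB)):    # criterion (iii): B <= BSAB
            return X
        return None                                    # no such {2}-inverse exists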
Using the previous theorem, we give the following characterization of {1, 4}-
inverses and {1, 2, 4}-inverses. The dual theorem can be stated and proved for
{1, 3}-inverses and {1, 2, 3}-inverses.
(vi) there exists a solution to the equation $A^{\top}YAA^{\top} = A^{\top}$, where $Y$ is an unknown taking
values in $L^{m\times m}$;
(vii) there exists a solution to the equation $ZAA^{\top} = A^{\top}$, where $Z$ is an unknown taking
values in $L^{n\times m}$;
(viii) $A^{\top} \leqslant A^{\top}(A^{\top}\backslash A^{\top}/AA^{\top})AA^{\top}$;
(ix) $A^{\top} \leqslant (A^{\top}/AA^{\top})AA^{\top}$.
If the statements (i)–(ix) are true, then
$A^{\top}/AA^{\top}$ is the greatest $\{1, 4\}$-inverse of $A$, and $A^{\top}(A^{\top}\backslash A^{\top}/AA^{\top}) = (A^{\top}/AA^{\top})A(A^{\top}/AA^{\top})$
is the greatest $\{1, 2, 4\}$-inverse of $A$.
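Before turning to the proof, we note that this conclusion can be tested directly: the sketch below (ours; it reuses the helpers from the previous sketches and additionally defines a transpose, again over the Gödel structure) computes $A^{\top}/AA^{\top}$, checks criterion (ix), and in the positive case also forms the greatest $\{1, 2, 4\}$-inverse $(A^{\top}/AA^{\top})A(A^{\top}/AA^{\top})$.

    # Compute the greatest {1,4}-inverse A^T/AA^T over the Godel structure,
    # provided criterion (ix) holds; otherwise report that no {1,4}-inverse
    # exists. Reuses maxmin_product, left_residual, entrywise_leq from above.

    def transpose(P):
        return [list(row) for row in zip(*P)]

    def greatest_14_inverse(A):
        At = transpose(A)
        AAt = maxmin_product(A, At)
        W = left_residual(At, AAt)                     # W = A^T/AA^T, in L^{n x m}
        if entrywise_leq(At, maxmin_product(W, AAt)):  # criterion (ix)
            greatest_124 = maxmin_product(maxmin_product(W, A), W)
            return W, greatest_124
        return None, None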
Proof. (i)$\Rightarrow$(ii). It is well-known that $XAX \in A\{1, 2, 4\}$ for every $X \in A\{1, 4\}$.
(ii)$\Rightarrow$(vi). Let $X \in A\{1, 2, 4\}$, i.e., $XAX = X$, $AXA = A$ and $(XA)^{\top} = XA$. Then
It is clear that A{1, 2}C(A), A{2}C(A ), , and by the proof of (vi)(iv) we obtain
The inverses considered in Theorem 8 are known in the literature as dual core
inverses, whereas their duals, obtained by combining $\{1, 3\}$-invertibility and group
invertibility, are known as core inverses. Core and dual core inverses have been
studied in [1], in the context of complex matrices, and in [4, 5, 19, 24, 25], in the
context of involutive semigroups and rings.
Finally, we discuss Moore-Penrose invertibility.
Theorem 9. The following statements for a matrix $A \in L^{m\times n}$ are equivalent:
(i) $A$ is MP-invertible;
(ii) $A$ is $\{1, 3\}$-invertible and $\{1, 4\}$-invertible;
(iii) there exists a $\{2\}$-inverse $X$ of $A$ such that $C(X) = C(A^{\top})$ and $R(X) = R(A^{\top})$;
(iv) there exists a solution to the equation $A^{\top}AA^{\top}Y = A^{\top}$, where $Y$ is an unknown taking
values in $L^{m\times m}$;
(v) there exists a solution to the equation $ZA^{\top}AA^{\top} = A^{\top}$, where $Z$ is an unknown taking
values in $L^{n\times n}$;
(vi) $A^{\top} \leqslant A^{\top}AA^{\top}(A^{\top}AA^{\top}\backslash A^{\top})$;
(vii) $A^{\top} \leqslant (A^{\top}/A^{\top}AA^{\top})A^{\top}AA^{\top}$;
(viii) $A^{\top} \leqslant A^{\top}A(A^{\top}A\backslash A^{\top})$ and $A^{\top} \leqslant (A^{\top}/AA^{\top})AA^{\top}$.
If the statements (i)–(viii) are true, then
$$A^{\dagger} = A^{\top}(A^{\top}AA^{\top}\backslash A^{\top}) = (A^{\top}/A^{\top}AA^{\top})A^{\top} = (A^{\top}/AA^{\top})A(A^{\top}A\backslash A^{\top}). \qquad (12)$$
Proof. The equivalence of (i) and (ii) is a well-known result. We only note that if
$X \in A\{1, 3\}$ and $Y \in A\{1, 4\}$, then $A^{\dagger} = YAX$, and by Theorem 5 and its dual it fol-
lows that $A^{\dagger} = (A^{\top}/AA^{\top})A(A^{\top}A\backslash A^{\top})$.
Next, (i)$\Rightarrow$(iii) is also a well-known result (it can be easily verified), (iii)$\Rightarrow$(ii)
and (ii)$\Rightarrow$(viii) are immediate consequences of Theorem 5 and its dual, (iii)$\Rightarrow$(iv)
and (iii)$\Rightarrow$(v) follow directly by Theorem 6, and (iv)$\Rightarrow$(vi) and (v)$\Rightarrow$(vii) can be
proved in the same way as the corresponding parts of the previous theorems.
Therefore, it remains to prove (iv)$\Rightarrow$(i), since (v)$\Rightarrow$(i) can be proved in a simi-
lar way. Let us note that (iv)$\Rightarrow$(i) was first proved by Crvenković in [6] (his proof
can also be found in [9] and [15]), and it was rediscovered independently in [25].
Here we give a simpler proof. Consider $S \in L^{m\times m}$ such that $A^{\top}AA^{\top}S = A^{\top}$, and
set $X = A^{\top}S$. Then $A^{\top}AX = A^{\top}$, whence $X \in A\{1, 3\}$. Moreover, we have that
$$X = A^{\top}S = (AXA)^{\top}S = A^{\top}X^{\top}A^{\top}S = A^{\top}X^{\top}X,$$
$$(XA)^{\top} = (A^{\top}X^{\top}XA)^{\top} = A^{\top}X^{\top}XA = XA,$$
$$XAX = (XA)^{\top}X = A^{\top}X^{\top}X = X,$$
and therefore $X$ satisfies the equations (1)–(4), i.e., $X$ is the Moore-Penrose inverse of $A$.
$$E = A^{\dagger}A = A^{\top}(A^{\top}AA^{\top}\backslash A^{\top})A \leqslant A^{\top}AA^{\top}(A^{\top}AA^{\top}\backslash A^{\top})A = A^{\top}A.$$
for every $k \in \mathbb{N}$. As is known, every finitely generated sublattice of a distributive
lattice is finite, and in a Heyting algebra the multiplication coincides with the meet.
Therefore, all entries of the matrices from the sequence $\{(A^{\top}A)^k\}_{k\in\mathbb{N}}$ belong to the
finite sublattice of $L$ generated by all entries of the matrix $A^{\top}A$, so this sequence
is finite, and the standard semigroup-theoretical argument shows that there exists
$k \in \mathbb{N}$ such that $(A^{\top}A)^k$ is an idempotent (cf. Theorem 1.2.2 of [14] or Theorem 1.8
of [2]). However, all powers of $A^{\top}A$ belong to the subgroup $G$ whose only idempotent
is its identity $E$, so we have that $E \leqslant A^{\top}A \leqslant (A^{\top}A)^k = E$, that is,
$E = A^{\top}A$. From this it follows that
$$A^{\dagger} = A^{\dagger}AA^{\dagger} = A^{\top}AA^{\dagger} = A^{\top},$$
Let us note that the same proof is valid if L is an arbitrary distributive lattice.
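As a final illustration (ours, again over the Gödel structure and reusing the helpers from the previous sketches), the following sketch tests MP-invertibility via criterion (vi) of Theorem 9 and, in the positive case, returns the Moore-Penrose inverse computed by the first expression in (12).

    # Test MP-invertibility over the Godel structure via criterion (vi) of
    # Theorem 9 and, if it holds, return the MP inverse A^T(A^T A A^T \ A^T)
    # as in (12). Reuses maxmin_product, right_residual, transpose and
    # entrywise_leq from the sketches above.

    def moore_penrose_inverse(A):
        At = transpose(A)
        AtAAt = maxmin_product(At, maxmin_product(A, At))  # A^T A A^T, in L^{n x m}
        M = right_residual(AtAAt, At)                      # A^T A A^T \ A^T, in L^{m x m}
        if entrywise_leq(At, maxmin_product(AtAAt, M)):    # criterion (vi)
            return maxmin_product(At, M)                   # first expression in (12)
        return None                                        # A is not MP-invertible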
References
1. Baksalary, O.M., Trenkler, G.: Core inverse of matrices, Linear Multilinear Algebra
58, 681–697 (2010)
2. Bogdanović, S., Ćirić, M., Popović, Ž.: Semilattice Decompositions of Semigroups,
University of Niš, Faculty of Economics, 2011.
3. Cen, J.-M.: Fuzzy matrix partial orderings and generalized inverses, Fuzzy Sets and
Systems 105, 453–458 (1999)
4. Chen, J.L., Patrício, P., Zhang, Y.L., Zhu, H.H.: Characterizations and representations
of core and dual core inverses, Canadian Mathematical Bulletin, http://dx.doi.
org/10.4153/CMB-2016-045-7
5. Ćirić, M., Stanimirović, P., Ignjatović, J.: Outer and inner inverses in semigroups
belonging to the prescribed Green's equivalence classes, to appear.