
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 3, NO. 5, SEPTEMBER 1992

Adaptive Fuzzy Leader Clustering of Complex Data Sets in Pattern Recognition


Scott C. Newton, Surya Pemmaraju, and Sunanda Mitra

Abstract—Most real data structures encountered in speech and image recognition and in medical and many other decision making tasks are quite complex in nature and rather difficult to organize for designing autonomous and optimal control and recognition systems. This paper presents a modular, unsupervised neural network architecture which can be used for clustering and classification of complex data sets. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the adaptive resonance theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from fuzzy C-means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets.

I. INTRODUCTION

CLUSTER analysis has been a significant research area in pattern recognition for a number of years [1]-[4]. Since clustering techniques are applied to the unsupervised classification of pattern features, a neural network of the adaptive resonance theory (ART) type [5], [6] appears to be an appropriate candidate for implementation of clustering algorithms [7]-[10]. Clustering algorithms generally operate by optimizing some measure of similarity. Classical, or crisp, clustering algorithms such as ISODATA [11] partition the data such that each sample is assigned to one and only one cluster. Often with data analysis it is desirable to allow membership of a data sample in more than one class and also to have a degree of belief that the sample belongs to each class. The application of fuzzy set theory [12] to classical clustering algorithms has resulted in a number of algorithms [13]-[16] with improved performance, since unequivocal membership assignment is avoided. However, estimating the optimum number of clusters in any real data set remains a difficult problem [17].

It is anticipated, however, that a valid fuzzy cluster measure implemented in an unsupervised neural network architecture could provide solutions to various real data clustering problems. The present work describes an unsupervised neural network architecture [18], [19] developed from the concept of ART-1 [5] while including a relocation of the cluster centers from FCM system equations for the centroid and the membership values [2]. Our adaptive fuzzy leader clustering (AFLC) system differs from other fuzzy ART-type clustering algorithms [20], [21] incorporating fuzzy min-max learning rules. AFLC presents a new approach to unsupervised clustering and has been shown to correctly classify a number of data sets, including the Iris data. This fuzzy modification of an ART-1 type neural network, i.e., the AFLC system, allows classification of discrete or analog patterns without prior knowledge of the number of clusters in a data set. The optimal number of clusters in many real data sets is, however, still dependent on the validity of the cluster measure, crisp or fuzzy, employed for a particular data set. Section II of this paper describes the new AFLC system and algorithm. Section III discusses a number of aspects and the potential strength of the AFLC architecture and operation in clustering complex data sets. Section IV presents the test results of AFLC operation on the data sets used. Finally, Section V contains additional remarks on operation and future applications of the AFLC algorithm.

Manuscript received June 24, 1991; revised November 24, 1991. This work was supported in part by NASA-JSC under Contract NAG9-509, by a grant from the Northrop Corporation under Contract HHH7800809K, and by the Texas Advanced Technology Program under Grant 003644-047. Support was also received in the form of a Texas Instruments Graduate Student Fellowship to the first author through Texas Tech University. The authors are with the Computer Vision and Image Analysis Lab, Department of Electrical Engineering, Texas Tech University, Lubbock, TX 79409. IEEE Log Number 9201455.

II. ADAPTIVE FUZZY LEADER CLUSTERING SYSTEM AND ALGORITHM

A. AFLC System and Algorithm Overview

AFLC is a hybrid neural-fuzzy system which can be used to learn cluster structure embedded in complex data sets, in a self-organizing, stable manner. This system has been adapted from the concepts of the ART-1 structure, which is limited to binary input vectors [5]. Pattern classification in ART-1 is achieved by assigning a prototype vector to each cluster that is incrementally updated [10].

Let $X_j = \{X_{j1}, X_{j2}, \ldots, X_{jp}\}$ be the $j$th input vector for $1 \le j \le N$, where $N$ is the total number of samples in the data set and $p$ is the dimension of the input vectors. The initialization and updating procedures in ART-1 involve similarity measures between the bottom-up weights ($b_{ki}$, where $k = 1, 2, \ldots, p$) and the input vector ($X_j$), and a verification of $X_j$ belonging to the $i$th cluster by matching of the top-down weights ($t_{ik}$) with $X_j$. For continuous-valued features, the above procedure is changed as in ART-2 [6]. However, if the ART-type networks are not made to represent biological networks, then a greater flexibility is allowed in the choice of similarity metric. A choice of Euclidean metric is made in developing the AFLC system while keeping a simple control structure adapted from ART-1.

Fig. 1. Operation characteristics of the AFLC architecture. (a) Initial stage of identifying a cluster prototype. (b) The comparison stage, using the criterion of Euclidean distance ratio R > τ to reject new data samples from the cluster prototype; the reset control implies the deactivation of the original prototype and activation of a new cluster prototype. (c) The τ-C graph for choosing τ for unlabeled data sets (curves shown for the fingerprint data and the Anderson Iris data).

Parts (a) and (b) of Fig. 1 represent the AFLC system and operation for initialization and comparison of cluster prototypes from input feature vectors, which may be discrete or analog. The updating procedure in the AFLC system involves relocation of the cluster prototypes by incremental updating of the centroids $v_i$ (the cluster prototypes) using a hybrid scheme based on partial updating of the FCM equations [2] for the centroids and the membership values:

$$v_i = \frac{\sum_{j=1}^{N_i} (\mu_{ij})^m X_j}{\sum_{j=1}^{N_i} (\mu_{ij})^m} \qquad (1)$$

$$\mu_{ij} = \left[ \sum_{k=1}^{C} \left( \frac{\|X_j - v_i\|}{\|X_j - v_k\|} \right)^{2/(m-1)} \right]^{-1} \qquad (2)$$

where $N_i$ is the number of samples in cluster $i$ and $C$ is the number of clusters. Equation (2) is a full membership update by FCM; equation (1) updates $v_i$ using only the columns (samples) currently associated with class $i$. The summation in (1) would extend from 1 to $N$ in full FCM updating of the class prototypes.

As described here, AFLC is primarily used as a classifier of feature vectors employing an on-line learning scheme. Fig. 1(a) shows a $p$-dimensional discrete or analog-valued input feature vector, $X$, presented to the AFLC system. The system is made up of the comparison layer, the recognition layer, and the surrounding control logic. The AFLC algorithm initially starts with the number of clusters ($C$) set to zero. The system is initialized with the input of the first feature vector $X$. As in leader clustering, this first input is taken to be the prototype for the first cluster. The normalized input feature vector is then applied to the bottom-up weights in a simple competitive learning scheme, or dot product. The node that receives the largest input activation $Y_i$ is chosen as the prototype vector, as in the original ART-1:

$$Y_i = \sum_{k=1}^{p} b_{ki} X_{jk}.$$
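For concreteness, the following is a minimal numerical sketch of the update rules (1) and (2); the NumPy implementation, the function names, and the small epsilon guard are ours, not the authors'.

```python
import numpy as np

def fcm_memberships(X, V, m=2.0):
    """Eq. (2): mu[i, j] = 1 / sum_k (||X_j - v_i|| / ||X_j - v_k||)^(2/(m-1))."""
    d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)  # d[i, j] = ||X_j - v_i||
    d = np.maximum(d, 1e-12)          # guard: sample sitting exactly on a centroid
    p = 2.0 / (m - 1.0)
    # ratio[i, k, j] = d[i, j] / d[k, j]; summing over k gives the denominator
    return 1.0 / ((d[:, None, :] / d[None, :, :]) ** p).sum(axis=1)

def fcm_centroid(Xi, mu_i, m=2.0):
    """Eq. (1): centroid of cluster i as a membership-weighted mean of samples.

    Passing only the N_i samples of cluster i reproduces the partial update
    described in the text; passing all N samples gives the full FCM update."""
    w = mu_i ** m
    return (w[:, None] * Xi).sum(axis=0) / w.sum()
```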

Therefore the recognition layer serves to initially classify an input. This first-stage classification activates the prototype, or top-down expectation ($t_{ik}$), for a cluster, which is forwarded to the comparison layer. The comparison layer serves both as a fan-out site for the inputs and as the location of the comparison between the top-down expectation and the input. The control logic with an input enable command allows the comparison layer to accept a new input as long as a comparison operation is not currently being processed. The control logic with a compare imperative command disables the acceptance of new input and initiates comparison between the cluster prototype associated with $Y_i$, i.e., the centroid $v_i$, and the current input vector, using (4). The reset signal is activated when a mismatch of the first and second input vectors occurs according to the criterion of a distance ratio threshold, as expressed by

$$R = \frac{\|X_j - v_i\|}{\frac{1}{N_i} \sum_{k=1}^{N_i} \|X_k - v_i\|} \qquad (3)$$

where $k = 1, \ldots, N_i$ indexes the samples in class $i$ and $\|X_j - v_i\|$ is the Euclidean distance as indicated in

$$\|X_j - v_i\| = \left[ \sum_{l=1}^{p} (X_{jl} - v_{il})^2 \right]^{1/2}. \qquad (4)$$

If the ratio $R$ is less than a user-specified threshold $\tau$, then the input is found to belong to the cluster originally activated by the simple competition. The choice of the value of $\tau$ is critical and is found by a number of initial runs. Preliminary runs with $\tau$ varying over a range of values yield a good estimate of the possible number of clusters in unlabeled data sets.

When an input is classified as belonging to an existing cluster, it is necessary to update the expectation (prototype) and the bottom-up weights associated with that cluster. First, the degree of membership of $X$ in the winning cluster is calculated. This degree of membership, $\mu$, gives an indication, based on the current state of the system, of how heavily $X$ should be weighted in the recalculation of the class expectation. The cluster prototype is then recalculated as a weighted average of all the elements within the cluster. The update rules are as follows: the membership value $\mu_{ij}$ of the current input sample $X_j$ in the winning class $i$ is calculated using (2), and then the new cluster centroid for cluster $i$ is generated using (1). As with the FCM, $m$ is a parameter which defines the fuzziness of the results and is normally set between 1.5 and 30. For the following applications, $m$ was experimentally set to 2.

The AFLC algorithm can be summarized by the following steps:
1) Start with no cluster prototypes, C = 0.
2) Let $X_j$ be the next input vector.
3) Find the first-stage winner, $Y_i$, as the cluster prototype with the maximum dot product.
4) If no $Y_i$ satisfies the distance ratio criterion, then create a new cluster and make its prototype vector equal to $X_j$. Output the index of the new cluster.
5) Otherwise, update the winner cluster prototype associated with $Y_i$ by calculating the new centroid and membership values using (1) and (2). Output the index of $Y_i$. Go to step 2.
A flow chart of the algorithm is shown in Fig. 2; a code sketch of these steps follows below.

Fig. 2. Flowchart of the AFLC algorithm: the winning class centroid is compared with the input through the distance ratio of (3), and a new cluster prototype is started on mismatch.
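The code below is our reading of steps 1)-5), reusing fcm_memberships and fcm_centroid from the earlier listing. Note that the paper does not spell out how the average distance in (3) is defined for a one-sample cluster; here the current input is included in the average so the ratio stays finite, which is an assumption of ours.

```python
import numpy as np

def aflc(samples, tau, m=2.0):
    """Sketch of the AFLC loop; returns a label per sample and the centroids."""
    clusters, centroids, labels = [], [], []
    for x in samples:
        if not centroids:                       # step 1: first input seeds cluster 0
            clusters.append([x]); centroids.append(np.array(x, dtype=float))
            labels.append(0)
            continue
        V = np.stack(centroids)
        xn = x / max(np.linalg.norm(x), 1e-12)  # inputs normalized for the competition
        Vn = V / np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1e-12)
        order = np.argsort(-(Vn @ xn))          # step 3: first-stage winners, best first
        winner = -1
        for i in order:                         # search phase over candidate winners
            dists = np.linalg.norm(np.vstack(clusters[i] + [x]) - V[i], axis=1)
            R = np.linalg.norm(x - V[i]) / max(dists.mean(), 1e-12)   # eq. (3)
            if R < tau:                         # acceptance test against threshold tau
                winner = i
                break
        if winner < 0:                          # step 4: mismatch everywhere -> new cluster
            clusters.append([x]); centroids.append(np.array(x, dtype=float))
            labels.append(len(centroids) - 1)
        else:                                   # step 5: fuzzy update of the winner
            clusters[winner].append(x)
            Xi = np.stack(clusters[winner])
            mu_i = fcm_memberships(Xi, np.stack(centroids), m)[winner]
            centroids[winner] = fcm_centroid(Xi, mu_i, m)
            labels.append(winner)
    return np.array(labels), np.stack(centroids)
```

A typical call would be `labels, V = aflc(np.asarray(data), tau=5.0)`, with `tau` chosen by the preliminary runs described above.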
III. OPERATIONAL CHARACTERISTICS OF AFLC

A. Match-Based Learning and the Search

In match-based learning, a new input is learned only after being classified as belonging to a particular class. This process ensures stable and consistent learning of new inputs by updating parameters only for the winning cluster and only after classification has occurred. This differs from error-based learning schemes, such as back-propagation of error, where new inputs are effectively averaged with old learning, resulting in forgetting and possibly oscillatory weight changes. In [5] match-based learning is referred to as resonance, hence the name adaptive resonance theory.

Because of its ART-like control structure, AFLC is capable of implementing a parallel search when the distance ratio does not satisfy the thresholding criterion. The search is arbitrated by appropriate control logic surrounding the comparison and recognition layers of Fig. 1. This type of search is necessary owing to the incompleteness of the classification at the first stage. For illustration, consider the two vectors (1,1) and (5,5). Both possess the same unit vector. Since the competition in the bottom-up direction consists of measuring how well the normalized input matches the weight vector for each class $i$, these inputs would both excite the same activation pattern in the recognition layer.

In operation, the comparison layer serves to test the hypothesis returned by the competition performed at the recognition layer. If the hypothesis is disconfirmed by the comparison layer, i.e., $R > \tau$, then the search phase continues until the correct cluster is found or another cluster is created. Normalization of the input vectors (features) is done only in the recognition layer for finding the winning node. This normalization is essential to avoid large values of the dot products of the input features and the bottom-up weights, and also to avoid initial misclassification arising from large variations in magnitudes of the cluster prototypes. The search process, however, renormalizes only the centroid and not the input vectors again.
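The (1,1) and (5,5) example is easy to verify numerically: both collapse to the same unit vector, so the dot-product competition alone cannot distinguish them, and only the Euclidean test of (3) can. A small check (ours, for illustration):

```python
import numpy as np

a, b = np.array([1.0, 1.0]), np.array([5.0, 5.0])
# Both inputs normalize to the same unit vector [0.7071, 0.7071] ...
print(a / np.linalg.norm(a), b / np.linalg.norm(b))
# ... so any bottom-up weight vector scores them identically:
w = np.array([0.6, 0.8])
print((a / np.linalg.norm(a)) @ w, (b / np.linalg.norm(b)) @ w)  # equal activations
```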
B. Determining the Number of Output Classes

AFLC utilizes a dynamic, self-organizing structure to learn the characteristics of the input data. As a result, it is not necessary to know the number of clusters a priori; new clusters are added to the system as needed. This characteristic is necessary for autonomous behavior in practical situations in which nonlinearities and nonstationarity are found.

Clusters are formed and trained, on-line, according to the search and learning algorithms. Several factors affect the number, size, shape, and location of the clusters formed in the feature space. Although it is not necessary to know the number of clusters which actually exist in the data, the number of clusters formed will depend upon the value of $\tau$. A low threshold value will result in the formation of more clusters because it will be more difficult for an input to meet the classification criteria. A high value of $\tau$ will result in fewer, less dense clusters. For data structures having overlapping clusters, the choice of $\tau$ is critical for correct classification, whereas for nonoverlapping cluster data the sensitivity of $\tau$ is not a significant issue. In the latter case the value of $\tau$ may vary over a certain range but still yield correct classification. Therefore the sensitivity of $\tau$ is highly dependent on the specific data structure, as shown in Fig. 1(c). The relationship between $\tau$ and the optimal number of clusters in a data set is currently being studied; a sweep over candidate thresholds, as sketched below, gives a practical estimate.
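A minimal sketch of such a sweep, assuming the aflc() routine sketched earlier; the sweep bounds are placeholders of ours, not values from the paper:

```python
import numpy as np

def tau_sweep(data, taus):
    """Run AFLC once per threshold and record the resulting cluster count C."""
    return np.array([len(aflc(data, tau)[1]) for tau in taus])

# e.g.: taus = np.linspace(2.0, 8.0, 13); C = tau_sweep(X, taus)
# Plotting C against tau gives a curve like Fig. 1(c); a long flat
# stretch of C(tau) suggests a stable estimate of the cluster count.
```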
cluster shapes. The use of the Mahalanobis distance accounts
for some variations in cluster shape, but its nonlinearity serves
C. Dynamic Cluster Sizing to place constraints on the stability of its results. Also, as with
other metrics, the Euclidean and Mahalanobis distance metrics
As described earlier, r is compared with a ratio of vector
lose meaning in an anisotropic space.
norms. The average distance parameter for a cluster is re-
calculated after the addition of a new input to that cluster;
therefore, this ratio ( R ) represents a dynamic description of I v . TESTS AND RESULTS:FEATUREVECTOR CLASSIFICATION
the cluster. If the inputs are dense around the cluster prototype,
then the size of the cluster will decrease, resulting in a more A. Clustering of the Anderson Iris Data
stringent condition for membership of future inputs to that The Anderson Iris data set [23], consists of 150 four-
class. If the inputs are widely grouped around the cluster dimensional feature vectors. Each pattern corresponds to char-
prototype, then this will result in less stringent conditions for acteristics of one flower from one of the species of Iris. Three
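To make the metric discussion concrete, the sketch below contrasts the Euclidean distance with the Mahalanobis distance, which rescales each direction by the cluster covariance. The comparison is ours for illustration; the paper's own implementation uses the Euclidean form throughout.

```python
import numpy as np

def euclidean(x, v):
    return float(np.linalg.norm(x - v))

def mahalanobis(x, v, cov):
    # Weights each direction by the inverse covariance, so elongated
    # (non-circular) cluster shapes are handled more gracefully.
    d = x - v
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

cov = np.array([[9.0, 0.0], [0.0, 1.0]])   # cluster elongated along the x axis
v = np.zeros(2)
on_axis, off_axis = np.array([3.0, 0.0]), np.array([0.0, 3.0])
print(euclidean(on_axis, v), mahalanobis(on_axis, v, cov))    # 3.0 vs 1.0
print(euclidean(off_axis, v), mahalanobis(off_axis, v, cov))  # 3.0 vs 3.0
```

Both test points are equally far in the Euclidean sense, but the Mahalanobis metric treats the point along the cluster's long axis as much closer.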
IV. TESTS AND RESULTS: FEATURE VECTOR CLASSIFICATION

A. Clustering of the Anderson Iris Data

The Anderson Iris data set [23] consists of 150 four-dimensional feature vectors. Each pattern corresponds to characteristics of one flower from one of the species of Iris. Three varieties of Iris are each represented by 50 of the feature vectors. This data set is popular in the literature and gives results by which AFLC can be compared to similar algorithms.


Fig. 3. Anderson Iris data represented by three-dimensional features.

Fig. 4. (a) Computed centroids of three Iris clusters based on all four features. (b) Iris cluster classification results shown as a confusion matrix.

We made 52 runs of the AFLC algorithm on the Iris data, for 13 different values of $\tau$ with four runs for each $\tau$. Fig. 1(c) shows the resulting $\tau$-$C$ graph. With the Euclidean distance ratio and $\tau$ ranging between 4.5 and 5.5, the sample data were classified into three clusters with only seven misclassifications. The misclassified samples actually belonged to Iris versicolor, cluster #2, and were misclassified as Iris virginica, cluster #1. From Fig. 1(c) it can be observed that the optimal number of clusters can be determined from the $\tau$-$C$ graph as the value of $C$ that has $dC/d\tau = 0$ ($C \neq 1$) for the maximum possible range of $\tau$.

Fig. 3 shows the input Iris data clusters using only three features for each sample data point. Fig. 4(a) shows the computed centroids of the three clusters based on all four features. The intercluster Euclidean distances are found to be 1.75 ($d_{12}$), 4.93 ($d_{23}$), and 3.29 ($d_{13}$), where $d_{ij}$ is the intercluster distance between clusters $i$ and $j$. The comparatively smaller intercluster distance between clusters 1 and 2 indicates the proximity of these clusters. Fig. 4(b) shows a confusion matrix that summarizes the classification results.
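The $dC/d\tau = 0$ rule can be mechanized as choosing the value of $C$ (other than 1) that stays constant over the widest range of $\tau$. A small sketch of ours, applicable to the output of the tau_sweep() sketch given earlier:

```python
import numpy as np

def optimal_clusters(taus, counts):
    """Return the C != 1 with the longest run of constant C(tau)."""
    best_c, best_span = None, -1.0
    start = 0
    for i in range(1, len(counts) + 1):
        # a run of equal counts ends at index i (or at the end of the sweep)
        if i == len(counts) or counts[i] != counts[start]:
            span = taus[i - 1] - taus[start]
            if counts[start] != 1 and span > best_span:
                best_c, best_span = counts[start], span
            start = i
    return best_c

# For a sweep matching the Iris experiment (C = 3 over tau in [4.5, 5.5]),
# this returns 3.
```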

B. Classification of Noisy Laser-Luminescent Fingerprint Image Data

Fingerprint matching poses a challenging clustering problem. Recent developments in automated fingerprint identification systems employ primitive and computationally intensive matching techniques, such as counting ridges between minutiae of the fingerprints [24]. Although the technique of laser-luminescent image acquisition of latent fingerprints often provides identifiable images [25], these images suffer from amplified noise, poor contrast, and uniform intensity. Conventional enhancement techniques such as adaptive binarization and wedge filtering provide enhancement but with a significant loss of information necessary for matching. Recent work [26] presents a novel three-stage matching algorithm for fingerprint enhancement and matching. Fig. 5(b) shows the enhanced image of Fig. 5(a) after selective Fourier spectral enhancement and band-pass filtering. We used the AFLC algorithm to cluster three different classes of fingerprint images using seven invariant moment features [26], [27] computed from the enhanced images [26]. A total of 24 data samples are used, each sample being a seven-dimensional moment feature vector. These moment invariants are a set of nonlinear functions which are invariant to translation, scale, and rotation.

Fig. 5. (a) A noisy laser-luminescent fingerprint image. (b) The enhanced image of (a) obtained by selective Fourier spectral filtering.
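For reference, the seven invariants of [27] can be written directly in terms of normalized central moments. The sketch below is a schematic of the feature vector only; the actual pipeline of [26] applies enhancement before feature extraction.

```python
import numpy as np

def hu_moments(img):
    """Hu's seven moment invariants [27] of a 2-D gray-level image."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[: img.shape[0], : img.shape[1]]
    m00 = img.sum()
    xb, yb = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):
        # normalized central moment: invariant to translation and scale
        mu = ((x - xb) ** p * (y - yb) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    a, b = n30 + n12, n21 + n03                 # recurring third-order sums
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        a ** 2 + b ** 2,
        (n30 - 3 * n12) * a * (a ** 2 - 3 * b ** 2)
            + (3 * n21 - n03) * b * (3 * a ** 2 - b ** 2),
        (n20 - n02) * (a ** 2 - b ** 2) + 4 * n11 * a * b,
        (3 * n21 - n03) * a * (a ** 2 - 3 * b ** 2)
            - (n30 - 3 * n12) * b * (3 * a ** 2 - b ** 2),
    ])
```

The combination of second- and third-order moments makes the seven values invariant to translation, scale, and rotation, which is the property the clustering relies on.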

Fig. 6. (a) Computed centroids of three fingerprint clusters in seven-dimensional vector space. (b) Fingerprint data classification results shown as a confusion matrix.

Cluster No. | Cluster centroid vector, components 1-7
1 | 199.48, 15254.10, 75.25, 14.99, 488.95, -822.35, 278.50
2 | 262.52, 15956.27, 22750.70, 4500.20, 23888973.00, 530694.06, -11096280.00
3 | 538.03, 803.625, 57.02, 34.38, 427.83, 112.10, 881.09

The three higher-order moment features are given less weight, thus reducing the effect of noise and leading to proper classification. The $\tau$-$C$ graph for the fingerprint data in Fig. 1(c) shows a range of $\tau$ from 3.0 to 4.5 for which proper classification resulted. The fingerprint data have also been correctly classified by a k-nearest-neighbor clustering using only four moment features [26]. The Euclidean distances between these clusters indicate that the clusters are well separated, which is consistent with the comparatively larger range of $\tau$ found for proper classification. Parts (a) and (b) of Fig. 5 represent one fingerprint class before and after enhancement. Fig. 6(a) shows the computed centroids of the three fingerprint clusters. Fig. 6(b) shows a confusion matrix that indicates correct classification results.

V. CONCLUSION

It is possible to apply many of the concepts of AFLC operation to other control structures. Other approaches to fuzzy ART are being explored [20], [21] that could also be used as the control structure for a fuzzy learning rule. Choices also exist in the selection of class prototypes. With some modification, any of these techniques can be incorporated into a single AFLC system or a hierarchical group of systems. The characteristics of that system will depend upon the choices made.

While AFLC does not solve all the problems associated with unsupervised learning, it does possess a number of desirable characteristics. The AFLC architecture learns and adapts on-line, such that it is not necessary to have a priori knowledge of all data samples or even of the number of clusters present in the data. However, the choice of $\tau$ is critical and requires some a priori knowledge of the compactness and separation of clusters in the data structure. Learning is match based, ensuring stable, consistent learning of new inputs. The output is a crisp classification and a degree of confidence for that classification. Operation is also very fast, and can be made faster through parallel implementation. A recent study [28] shows a different approach to neural-fuzzy clustering by integrating a fuzzy C-means model with Kohonen neural networks. A comparative study of these recently developed neural-fuzzy clustering algorithms is needed.

ACKNOWLEDGMENT

The authors gratefully acknowledge many excellent and thoughtful suggestions from J. Bezdek for improving this paper.

REFERENCES

[1] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. New York: Wiley, 1973.
[2] J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms. New York: Plenum, 1981.
[3] A. K. Jain and R. C. Dubes, Algorithms for Clustering Data. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[4] K. Fukunaga, Introduction to Statistical Pattern Recognition. San Diego: Academic Press, 1990.
[5] G. A. Carpenter and S. Grossberg, "A massively parallel architecture for a self-organizing neural pattern recognition machine," Computer Vision, Graphics, and Image Processing, vol. 37, pp. 54-115, 1987.
[6] G. A. Carpenter and S. Grossberg, "ART 2: Self-organization of stable category recognition codes for analog input patterns," Applied Optics, vol. 26, pp. 4919-4930, 1987.
[7] R. Lippmann, "An introduction to computing with neural networks," IEEE ASSP Magazine, no. 4, pp. 4-21, 1987.
[8] L. I. Burke, "Clustering characterization of adaptive resonance," Neural Networks, vol. 4, pp. 485-491, 1991.
[9] Y. H. Pao, Adaptive Pattern Recognition and Neural Networks. Reading, MA: Addison-Wesley, 1989.
[10] B. Moore, "ART 1 and pattern clustering," in Proc. 1988 Connectionist Models Summer School, 1988.
[11] G. H. Ball and D. J. Hall, "ISODATA, an iterative method of multivariate data analysis and pattern classification," in Proc. IEEE Int. Commun. Conf. (Philadelphia, PA), June 1966.
[12] L. A. Zadeh, "Fuzzy sets," Information and Control, vol. 8, pp. 338-353, 1965.
[13] J. C. Bezdek, "A physical interpretation of fuzzy ISODATA," IEEE Trans. Syst., Man, Cybern., vol. 6, pp. 387-389, 1976.
[14] J. C. Bezdek, "A convergence theorem for the fuzzy ISODATA clustering algorithms," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-2, no. 1, pp. 1-8, 1980.
[15] E. E. Gustafson and W. C. Kessel, "Fuzzy clustering with a fuzzy covariance matrix," in Proc. IEEE CDC (San Diego, CA), 1979, pp. 761-766.

[16] I. Gath and A. B. Geva, "Unsupervised optimal fuzzy clustering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, no. 7, pp. 773-781, 1989.
[17] X. L. Xie and G. Beni, "A validity measure for fuzzy clustering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 8, pp. 841-847, 1991.
[18] S. C. Newton and S. Mitra, "Self-organizing leader clustering in a neural network using a fuzzy learning rule," in SPIE Proc. Adaptive Signal Processing, vol. 1565, July 1991.
[19] S. C. Newton and S. Mitra, "Applications of neural networks in fuzzy pattern recognition," presented at the Research Conference on Neural Networks for Vision and Image Processing, Boston University, May 10-12, 1991.
[20] P. Simpson, "Fuzzy adaptive resonance theory," in Proc. Neuroengineering Workshop, Sept. 1990.
[21] G. Carpenter, S. Grossberg, and D. Rosen, "Fuzzy ART: An adaptive resonance algorithm for rapid, stable classification of analog patterns," in Proc. 1991 Int. Joint Conf. Neural Networks, July 1991.
[22] R. N. Dave, "Characterization and detection of noise in clustering," Pattern Recognition Letters, vol. 12, no. 11, Nov. 1991.
[23] E. Anderson, "The irises of the Gaspé peninsula," Bull. Amer. Iris Soc., vol. 59, pp. 2-5, 1935.
[24] E. Kaymaz and S. Mitra, "Analysis and matching of degraded and noisy fingerprints," in SPIE Proc. Applications of Digital Image Processing XV, vol. 1771, July 1992.
[25] C. R. Menzel, "Latent fingerprint development with lasers," ASTM Standardization News, vol. 13, pp. 34-37, Mar. 1985.
[26] E. Kaymaz, "Fingerprint analysis and matching," M.S.E.E. thesis, Texas Tech University, Dec. 1991.
[27] M. K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inform. Theory, vol. IT-8, pp. 179-187, 1962.
[28] J. C. Bezdek, E. C. K. Tsao, and N. R. Pal, "Fuzzy Kohonen clustering networks," in Proc. IEEE Int. Conf. Fuzzy Systems (San Diego, CA), Mar. 8-12, 1992, pp. 1035-1043.
