
APPLICATION OF NEURAL NETWORKS FOR CLASSIFICATION OF EDDY CURRENT NDT DATA

L. Udpa and S. S. Udpa

Department of Electrical Engineering
Colorado State University
Fort Collins, CO 80523

INTRODUCTION

The inverse problem in nondestructive evaluation involves the characterization of flaw parameters given a transducer response signal. In general, the governing equations and boundary conditions describing the underlying physical phenomena are complex. Consequently, analytical closed form solutions can be obtained only under strong simplifying assumptions with regard to the geometry and linearity of the problem. This precludes their use as direct inverse models for solving realistic NDT problems and necessitates indirect inverse models based on pattern recognition algorithms. These inverse models classify the NDT signal as belonging to one of the classes of defects stored in a data bank, as shown in Fig. 1.

Traditional pattern recognition techniques are based on the mathematical formulation of discriminant functions, which are hypersurfaces in the feature space. These techniques involve, in most cases, extensive use of a priori information such as the statistical distribution of the feature vectors [1-3]. The accuracy of these methods therefore depends on the validity of the a priori information used.

Fig. 1. Schematic representation of the inverse problem in NDT: the NDT signal passes through signal processing and pattern classification stages (the inverse model) to yield a defect class.

(Review of Progress in Quantitative Nondestructive Evaluation, Vol. 9, edited by D.O. Thompson and D.E. Chimenti, Plenum Press, New York, 1990)

Based on the observation that while computers are good at numerical computing, the human brain is better at recognizing patterns, an alternative approach motivated by the desire to mimic the brain has led to the development of neural network models. Artificial neural net models consist of a large number of simple computational units that are densely interconnected via interconnection weights. The inherent parallelism of these models can be used to rapidly select optimum weighted combinations of features to construct a trainable pattern recognition system. During the training process, the input patterns and corresponding desired responses are presented to the network. The model adjusts the weights in accordance with a least squares adaptation algorithm, minimizing the error between the model output and the desired response. Once the weights are adjusted, the network can be used with various test patterns.

This paper studies the application of neural network models for classifying defect signals from eddy current transducers used in nondestructive evaluation of materials. The following section presents a brief introduction to neural net models and the learning algorithm. The preprocessing of the transducer signals for data compression is then discussed. Finally, the performance of the network is presented and compared with results obtained using the K-means clustering algorithm.

NEURAL NETWORKS

The major characteristics of a neural net can be summarized as follows.

1. Large number of simple processing elements (neurons).
2. Dense interconnection between the neurons (dendrites and axons).
3. Functionality of a network is determined by the interconnection weights (synaptic strength).

Although this is an oversimplified model of the biological brain, the organization and the information processing strategies of an artificial neural network are based on the features of its biological counterpart. The neurons combine the input impulses in several ways, operating in parallel with other neurons to perform a variety of functions. In artificial neural nets, each simple node performs a weighted sum of the inputs and computes a nonlinear function of the result [4]. Three common types of nonlinearities, namely hard limiters, threshold logic and sigmoidal transformations, are shown in Fig. 2. The major focus of research in neural net models is the development of algorithms for adapting the interconnection weights and optimizing the network architecture.

Fig. 2. Three common nonlinearities used in the nodes of a neural network: the hard limiter f1(x), threshold logic f2(x) and the sigmoid.
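
As an illustration only, the following Python sketch implements the three node nonlinearities of Fig. 2 and applies one of them to a weighted input sum; the saturation levels and the example weights are assumptions for illustration, not values from the paper.

```python
import numpy as np

def hard_limiter(v):
    # f1(v): outputs +1 or -1 according to the sign of the weighted sum
    return np.where(v >= 0.0, 1.0, -1.0)

def threshold_logic(v):
    # f2(v): linear in a central band, clipped outside it (levels assumed here as 0 and 1)
    return np.clip(v, 0.0, 1.0)

def sigmoid(v):
    # smooth, differentiable squashing function; the form used later by backpropagation
    return 1.0 / (1.0 + np.exp(-v))

# a node output is the chosen nonlinearity applied to the weighted sum of its inputs
w = np.array([0.2, -0.5, 0.1])   # illustrative weights
x = np.array([1.0, 0.3, -2.0])   # illustrative inputs
print(sigmoid(np.dot(w, x)))
```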

Fig. 3. A single layer perceptron with output $y = f\left(\sum_{i=1}^{N} w_i x_i\right)$.

One of the earliest networks, developed by Rosenblatt, is the single layer perceptron [5], used as an adaptive two-class pattern classifier in a multidimensional space. In its simplest form it consists of a layer of input nodes connected to one output node via interconnection weights, as shown in Fig. 3. The response of the output node is the weighted sum of the input vector, and classification is simply based on the value of this response function.

The adaptive learning algorithm is based on a reward and punishment concept. If a presented pattern is correctly classified, the weights are left unchanged; on misclassification, the weights are modified according to a simple rule. The major limitation of this network is that it generates only linear decision boundaries. Since the classes are seldom linearly separable in practice, this led to a recession of interest in the area. However, it is now known that by introducing additional layers of nodes between the input and output layers, as shown in Fig. 4, one can generate nonlinear decision surfaces. The intermediate layers extract higher order correlations in the signals and in general can produce arbitrary decision boundaries. This increased flexibility is, however, achieved at the expense of a more complex training algorithm, which is described next.
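
To make the reward-and-punishment rule concrete, here is a minimal Python sketch of single layer perceptron training for a two-class problem; the learning rate, epoch limit and the ±1 class labels are illustrative assumptions and are not specified in the paper.

```python
import numpy as np

def train_perceptron(patterns, labels, epochs=100, lr=1.0):
    """Reward-and-punishment rule: weights change only when a pattern is
    misclassified, so the resulting decision boundary is always linear."""
    w = np.zeros(patterns.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, t in zip(patterns, labels):       # t is +1 or -1
            response = np.dot(w, x) + b          # weighted sum of the input vector
            predicted = 1 if response >= 0 else -1
            if predicted != t:                   # punishment: nudge the boundary toward the pattern
                w += lr * t * x
                b += lr * t
                errors += 1
        if errors == 0:                          # reward: every pattern correct, stop early
            break
    return w, b
```

If the two classes are linearly separable the loop terminates with zero errors; otherwise the weights keep changing, which is exactly the limitation that motivates the multilayer network of Fig. 4.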

Fig. 4. A multilayered perceptron.

Learning Algorithm

The backward error propagation algorithm [6] relies on a recursive procedure to estimate the weights by minimizing the error in the response at each layer, using a gradient procedure. Since this requires continuously differentiable functions, each node of the network computes the sigmoidal transformation of the weighted sum of its inputs.

The basic steps involved in the learning algorithm are as follows:

1. Initialize all weights $w_{ij}^{(1)}$ in layer 1 and $w_{jk}^{(2)}$ in layer 2 to small random values.

2. Present the training data by applying the input vector $\mathbf{x}$ to the input nodes and the corresponding desired outputs $d_k$ to the output nodes.

3. Calculate the actual output of the network using the sigmoid nonlinearity. The output of the jth hidden unit is

   $y_j = \frac{1}{1 + e^{-y_j'}}$   (1)

   where

   $y_j' = \sum_i w_{ij}^{(1)} x_i + \theta_j$   (2)

   and $\theta_j$ is an offset. The response of the kth output node is

   $z_k = \frac{1}{1 + e^{-z_k'}}$   (3)

   where

   $z_k' = \sum_j w_{jk}^{(2)} y_j + \phi_k$   (4)

   and $\phi_k$ is an offset.

4. Calculate the error signal at the output layer

   $E = \frac{1}{2} \sum_k (d_k - z_k)^2$   (5)

   If the error $E < \epsilon$, where $\epsilon$ is a preset tolerance, the network is 'trained'.

5. Minimize the error with respect to the interconnection weights using the gradient method. Adapt the weights by using [6]

   $w_{jk}^{(2)}(n+1) = w_{jk}^{(2)}(n) + \eta \, \delta_k \, y_j$   (6)

   where $\eta$ is the learning rate and

   $\delta_k = -\frac{\partial E}{\partial z_k'} = (d_k - z_k) \, z_k (1 - z_k)$   (7)

   $w_{ij}^{(1)}(n+1) = w_{ij}^{(1)}(n) + \eta \, \delta_j \, x_i$   (8)

   where

   $\delta_j = y_j (1 - y_j) \sum_k \delta_k \, w_{jk}^{(2)}$   (9)

   Go to step 2.

The learning procedure is entirely deterministic and can be easily implemented in a parallel environment.
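
A minimal NumPy sketch of steps 1-5 is given below, assuming plain gradient descent with the delta terms of Eqs. (6)-(9); the initialization range, tolerance and iteration limit are illustrative assumptions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_backprop(X, D, n_hidden=5, lr=0.1, tol=1e-3, max_iter=10000, seed=0):
    """X: rows are input pattern vectors; D: rows are the desired binary output vectors."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], D.shape[1]
    # step 1: small random initial weights; the offsets theta_j, phi_k become bias vectors
    W1 = rng.uniform(-0.1, 0.1, (n_in, n_hidden));  b1 = np.zeros(n_hidden)
    W2 = rng.uniform(-0.1, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(max_iter):
        # steps 2-3: forward pass, Eqs. (1)-(4)
        Y = sigmoid(X @ W1 + b1)            # hidden-unit outputs y_j
        Z = sigmoid(Y @ W2 + b2)            # output-node responses z_k
        # step 4: squared error over all patterns, Eq. (5)
        E = 0.5 * np.sum((D - Z) ** 2)
        if E < tol:
            break                           # network is 'trained'
        # step 5: delta terms and gradient-descent weight updates, Eqs. (6)-(9)
        delta_out = (D - Z) * Z * (1.0 - Z)
        delta_hid = (delta_out @ W2.T) * Y * (1.0 - Y)
        W2 += lr * Y.T @ delta_out;  b2 += lr * delta_out.sum(axis=0)
        W1 += lr * X.T @ delta_hid;  b1 += lr * delta_hid.sum(axis=0)
    return W1, b1, W2, b2
```

For the classifier described later, X would hold the eight Fourier descriptors of each signal and D the corresponding three-bit class codes.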

SIGNAL PREPROCESSING

The eddy current signals were first processed using the Fourier
descriptor method [7] to obtain a parametric representation of the signal.
The preprocessing stage provides a significant amount of data compression,
thereby avoiding problems of combinatorial explosion. In addition, the
classification performance of the neural net is rendered insensitive to
instrument gain drift and zero fluctuations, since the Fourier descriptor
representation is invariant under translation, rotation and scaling.

Briefly, the method involves the representation of the eddy current probe signal as a function of its arc length. Since this function is periodic for closed curves, it can be expanded in a Fourier series as explained by the following equations.

Consider a simple clockwise oriented smooth curve, as shown in Fig. 5, which is parametrically represented as a function of the arc length $\ell$, i.e. $Z(\ell) = [x(\ell), y(\ell)]$.

If $L$ represents the length of the curve, then

$Z(L + \ell) = Z(\ell)$   (10)

Let $\theta(\ell)$ denote the angular direction at a point $Z(\ell)$ located $\ell$ arc length units from an arbitrary starting point $Z(0)$. If the cumulative angular function $\phi(\ell)$ is defined as the net change in the angular direction at a point $\ell$ with reference to the starting point $Z(0)$, then

$\phi(\ell) = \theta(\ell) - \theta(0)$   (11)

$\phi(0) = 0$   (12)

$\phi(L) = -2\pi$   (13)

In order to obtain a representation that is invariant under rotation, translation and scaling, a normalized version $\phi^*(t)$ of the cumulative angular function is derived.

Fig. 5. A simple closed curve of arc length $L$, starting at $(x(0), y(0))$ with initial direction $\theta(0)$, represented by the angular function $\theta(\ell)$ and the cumulative angular function $\phi(\ell)$ as functions of the arc length $\ell$.

$\phi^*(t) = \phi\!\left(\frac{Lt}{2\pi}\right) + t, \qquad t \in [0, 2\pi]$   (14)

Using equations (12) and (13) we have

$\phi^*(0) = \phi^*(2\pi) = 0$

The periodic nature of $\phi^*(t)$ allows us to expand it in the form of a Fourier series to obtain the Fourier descriptors $(A_k, \alpha_k)$

$\phi^*(t) = a_0 + \sum_{k=1}^{\infty} \left(a_k \cos kt + b_k \sin kt\right)$   (15)

$\phi^*(t) = a_0 + \sum_{k=1}^{\infty} A_k \cos(kt - \alpha_k)$   (16)

where

$A_k = \sqrt{a_k^2 + b_k^2}$   (17)

and

$\alpha_k = \tan^{-1}\!\left(\frac{b_k}{a_k}\right)$   (18)

The magnitude coefficients $A_k$ are invariant under translation, rotation and scaling of the eddy current signal. A set of eight coefficients was used as the pattern vector input to the network.
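
As a rough illustration of this preprocessing step (not the exact procedure of Ref. [7]), the sketch below estimates the first eight magnitude coefficients $A_k$ from a digitized closed curve; it assumes the curve is sampled roughly uniformly along its arc length so that the Fourier integrals can be approximated by simple averages.

```python
import numpy as np

def fourier_descriptors(x, y, n_coeffs=8):
    """x, y: samples tracing one closed loop of the eddy current probe signal.
    Returns the magnitude coefficients A_1..A_n, which are invariant under
    translation, rotation and scaling of the curve."""
    dx = np.diff(np.append(x, x[0]))
    dy = np.diff(np.append(y, y[0]))
    seg = np.hypot(dx, dy)                          # segment lengths along the curve
    L = seg.sum()                                   # total arc length
    theta = np.unwrap(np.arctan2(dy, dx))           # angular direction theta(l)
    phi = theta - theta[0]                          # cumulative angular function, phi(0) = 0
    l = np.concatenate(([0.0], np.cumsum(seg)[:-1]))
    t = 2.0 * np.pi * l / L                         # normalized arc length, t in [0, 2*pi)
    phi_star = phi + t                              # phi*(t) = phi(Lt/2pi) + t, Eq. (14)
    A = np.zeros(n_coeffs)
    for k in range(1, n_coeffs + 1):
        a_k = 2.0 * np.mean(phi_star * np.cos(k * t))   # discrete estimate of a_k
        b_k = 2.0 * np.mean(phi_star * np.sin(k * t))   # discrete estimate of b_k
        A[k - 1] = np.hypot(a_k, b_k)                   # A_k, Eq. (17)
    return A

# example: a clockwise ellipse standing in for an impedance-plane trajectory
angles = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
print(fourier_descriptors(2.0 * np.cos(-angles), np.sin(-angles)))
```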

EXPERIMENTAL RESULTS

The neural network classifier used is shown in Fig. 6. It consisted of a layer of 8 input nodes, a hidden layer with 5 hidden nodes and an output layer with 3 output nodes. The network was trained to classify seven different classes of defects as described in Fig. 7. The defect classes correspond to the seven output vectors {(001), (010), (011), (100), (101), (110), (111)}. A set of 59 training signals was used to establish the interconnection weights. The network was simulated on a VAX 3600 computer. The algorithm converged after 647 iterations when the learning rate was set at 0.1. The performance of the neural net was compared with the classification obtained using the K-means clustering algorithm. The classification results are summarized in Table 1.

Fig. 6. The two layered neural network used for 7 class identification: the input vector of eight Fourier descriptors feeds the input nodes, and each defect class is encoded by a distinct three-bit output vector.
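
The three output nodes together encode the class number in binary. A hypothetical decoding helper is sketched below; the 0/1 tolerance band and the return of None for an ambiguous response are assumptions consistent with the tolerance and ambiguity flagging discussed in the Conclusions, not values taken from the paper.

```python
def decode_class(z, tol=0.2):
    """z: the three output-node responses. Returns the defect class 1-7,
    or None if any node falls outside the assumed 0/1 tolerance bands."""
    bits = []
    for zk in z:
        if zk >= 1.0 - tol:
            bits.append(1)
        elif zk <= tol:
            bits.append(0)
        else:
            return None                    # ambiguous response: flag rather than misclassify
    # class number is the binary value of the output vector, e.g. (0,1,1) -> class 3
    return 4 * bits[0] + 2 * bits[1] + bits[2]

print(decode_class([0.05, 0.93, 0.91]))    # -> 3
print(decode_class([0.55, 0.02, 0.96]))    # -> None (first node is ambiguous)
```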

CONCLUSIONS

A two layered artificial neural network was trained to identify seven different classes using a training set of 59 signals. The seven classes were identified using only three output nodes, thereby minimizing the number of weights to be estimated. Rather than presenting the entire signal as input, a set of eight Fourier descriptors representing the signal served as the input. The major issues considered in this implementation are classification accuracy and learning time.

Fig. 7. Description of some of the defect classes in an Inconel 600 tube: a through wall hole (diameter a), an axisymmetric OD slot (width a, depth b), a flat bottomed hole (diameter a, depth b), an axisymmetric ID slot (width a, depth b) and denting.

The performance of the network classifier is very encouraging compared to the results obtained earlier using a partially trained K-means clustering algorithm. In addition, the network classifier has the capability of flagging ambiguous data rather than misclassifying it. Neural nets also offer the advantage of being able to classify signals at speeds that are independent of the number of prototypes stored in the database. No study was done on optimizing the number of hidden units in the network for improving the performance. The number of iterations required for training the network and the classification accuracy depend on the tolerance allowed for the 0 and 1 states of the output nodes. Also, the convergence speed can be improved by using a momentum term in the gradient algorithm. In conclusion, neural network classifiers offer a powerful tool for signal interpretation in NDT.
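
For reference, a common form of the momentum modification (not spelled out in the paper) adds a fraction of the previous weight change to the update of Eq. (6):

$w_{jk}^{(2)}(n+1) = w_{jk}^{(2)}(n) + \eta \, \delta_k \, y_j + \alpha \left[ w_{jk}^{(2)}(n) - w_{jk}^{(2)}(n-1) \right], \qquad 0 \le \alpha < 1$

where $\alpha$ is the momentum coefficient; the same term can be applied to the layer 1 update of Eq. (8).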

Table 1. Summary of Classification Results

No.  Input vector (eight Fourier descriptors)     Neural net output  K-Means  True class
 1   .877 .688 .682 .669 .796 .354 .456 .589      1 0 1#             7#       1
 2   .879 .632 .683 .676 .821 .270 .434 .258      0 1 0              2        2
 3   .875 .585 .702 .644 .796 .314 .467 .290      0 1 0              2        2
 4   .876 .621 .704 .643 .806 .286 .472 .277      0 1 0              2        2
 5   .890 .347 .669 .700 .824 .333 .509 .800      0 0 1              1        1
 6   .886 .392 .632 .630 .749 .238 .402 .676      0 0 1              1        1
 7   .887 .404 .632 .634 .749 .213 .393 .677      0 0 1              1        1
 8   .885 .373 .639 .631 .743 .241 .414 .676      0 0 1              1        1
 9   .886 .393 .637 .627 .746 .227 .410 .677      0 0 1              1        1
10   .880 .572 .675 .702 .807 .292 .394 .219      0 1 0              2        2
11   .880 .570 .669 .706 .808 .297 .391 .220      0 1 0              2        2
12   .876 .473 .692 .613 .798 .231 .501 .272      0 1 1              3        3
13   .877 .451 .689 .628 .798 .241 .488 .249      0 1 1              3        3
14   .872 .531 .738 .562 .757 .299 .543 .195      0 1 0              2        2
15   .858 .354 .761 .587 .882 .143 .640 .159      0 1 1              3        3
16   .855 .990 .805 .609 .859 .293 .684 .779      1 0 0              4        4
17   .856 .993 .799 .612 .866 .275 .682 .779      1 0 0              4        4
18   .856 .995 .800 .611 .863 .283 .679 .778      1 0 0              4        4
19   .855 .985 .802 .606 .860 .293 .685 .779      1 0 0              4        4
20   .860 .894 .779 .595 .839 .305 .656 .717      1 0 0              4        4
21   .860 .895 .779 .591 .841 .292 .662 .716      1 0 0              4        4
22   .872 .434 .710 .606 .809 .217 .536 .217      0 1 1              3        3
23   .832 .257 .989 .203 .849 .075 .972 .988      1 0 1              1#       5
24   .990 .121 .215 .585 .214 .639 .168 .460      1 1 0              6        6
25   .878 .832 .682 .734 .849 .296 .480 .363      1 1 1              7        7
26   .875 .248 .829 .636 .908 .500 .845 .999      * 0 1              5        5
27   .901 .309 .708 .902 .981 .595 .655 .999      1 0 1              5        5
28   .831 .260 .995 .193 .842 .046 .977 .990      1 0 1              1#       5
29   .833 .276 .988 .207 .852 .057 .963 .990      1 0 1              1#       5
30   .833 .255 .989 .207 .848 .113 .978 .987      1 0 1              1#       5
31   .976 .070 .076 .616 .301 .559 .135 .472      1 1 0              6        6
32   .976 .075 .076 .620 .308 .536 .130 .472      1 1 0              6        6
33   .995 .133 .272 .630 .204 .590 .121 .465      1 1 0              6        6
34   1.00 .153 .303 .500 .127 .795 .245 .442      1 1 0              6        6
35   .995 .143 .280 .620 .214 .597 .143 .464      1 1 0              6        6
36   .867 .806 .717 .670 .839 .267 .529 .381      1 1 1              7        7
37   .865 .859 .727 .673 .839 .278 .546 .426      1 1 1              7        7
38   .867 .872 .718 .711 .849 .285 .513 .239      1 1 1              7        7

# Misclassified result

REFERENCES

1. C. H. Chen, Pattern Recognition and Artificial Intelligence, edited by C. H. Chen (Academic, New York, 1976).
2. J. T. Tou and R. C. Gonzalez, Pattern Recognition Principles (Addison-Wesley, Reading, Massachusetts, 1974).
3. G. H. Ball, Report No. RADC-TR-66-514, AD 643287, Stanford Research Institute, Menlo Park, California, 1966.
4. R. P. Lippmann, IEEE ASSP Magazine (1987).
5. R. Rosenblatt, Principles of Neurodynamics (Spartan Books, New York, 1959).
6. D. E. Rumelhart, G. E. Hinton and R. J. Williams, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, edited by D. E. Rumelhart and J. L. McClelland (M.I.T. Press, Cambridge, MA, 1986).
7. S. S. Udpa and W. Lord, Nondestructive Testing Communications, 1, 65 (1983).

