GRNN For Forecasting Resonance Frequency of Circular Patch Antenna
Publisher
International Journal of Research and Reviews in Soft and Intelligent Computing (IJRRSIC)
Vol. 1, No. 2, June 2011
ISSN: 2046-6412
Copyright Science Academy Publisher, United Kingdom
www.sciacademypublisher.com
GRNN for Forecasting Resonance Frequency of Circular Patch Antenna
Shuwen Chen¹ and Quan Hua²
¹ School of Information Science and Engineering, Southeast University, Nanjing, P. R. China
² School of Information and Communication Engineering, Nanjing University of Post and Telecommunication, Nanjing, P. R. China
Email: chenshuwen@126.com
Abstract: Microstrip patch antennas have been studied extensively, and the resonance frequency of a circular patch antenna is closely related to its geometrical parameters. In this work, the Generalized Regression Neural Network (GRNN) is implemented to forecast the resonance frequency of a circular patch antenna from its radius. After training, the GRNN successfully predicts, with satisfying accuracy, the resonance frequencies of circular patch antennas whose radii vary within a local region. The experimental results demonstrate the high accuracy of the GRNN forecasting method.
1. Introduction
Traditional analytical approaches to the resonance frequency of a circular patch antenna include theoretical computation [1-3] and direct measurement. With the development of computer technology, a number of numerical methods (in the time or frequency domain), such as MoM, FEM, BEM, and FDTD, have been applied to microwave radiation and scattering problems. These methods usually consume considerable computing time because of their large computational and storage requirements, while empirical formulas or look-up tables limit the attainable accuracy. The Generalized Regression Neural Network (GRNN) method is therefore presented to model the nonlinear relationship between the radius of a circular patch antenna and its resonance frequency.
As a new kind of computing model, artificial neural
network (ANN) has two advantages which traditional
numerical methods do not have:
- A perfect non-linear mapping performance.
- A lower demand for empirical knowledge of the modeled objects.
Compared with the BP and RBF networks [7-9], the GRNN has strong advantages in approximation ability, classification ability, and learning speed. The network converges to the optimal regression surface of the samples, and its predictions remain satisfying even when sample data are scarce. In addition, the GRNN can handle unstable data.
The rest of this paper is organized as follows: Section 2 gives a detailed description of the general principles of the GRNN; experiments in Section 3 demonstrate the effectiveness of the GRNN model; Section 4 is devoted to conclusions.
2. General Principle of GRNN
2.1. Basic Structure of GRNN
The GRNN [4, 5] consists of a radial basis function network layer and a linear network layer, as shown in Figure 1.
Figure 1. Basic structure of GRNN neuron
In this network, P represents the input vector; R represents the dimension of the input vector; S represents the number of neurons in each network layer, which equals the number of training samples; and b¹ represents the threshold of the hidden layer (the output layer has no threshold factor). The element-wise product of ‖dist‖ and b¹ forms the net input of the radial basis layer.
2.2. The Principle of GRNN Model
The basis function of the hidden-layer nodes takes the Gaussian form

\[ R_i(x) = \exp\!\left( -\frac{\lVert x - c_i \rVert^2}{2\sigma^2} \right) \qquad (1) \]

as the network transfer function, where \(c_i\) is the centre of neuron \(i\) and \(\sigma\) represents the smoothing factor [6], which is the only parameter of the GRNN that must be adjusted by hand. The learning process of the network depends primarily on the data samples.
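A minimal NumPy sketch of the Gaussian transfer function of Eq. (1); the function name and example values are illustrative assumptions, but the formula follows the equation above. A smaller smoothing factor makes the response more tightly localized around the centre:

```python
import numpy as np

def radial_basis(x, c, sigma):
    """Gaussian transfer function of Eq. (1): exp(-||x - c||^2 / (2*sigma^2)).

    x     : input vector
    c     : centre of the hidden-layer neuron (a training sample)
    sigma : smoothing factor, the only hand-tuned parameter of the GRNN
    """
    d2 = np.sum((np.asarray(x, dtype=float) - np.asarray(c, dtype=float)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

print(radial_basis([1.0, 2.0], [1.0, 2.0], 0.5))  # 1.0 at the centre
print(radial_basis([1.0, 2.0], [2.0, 2.0], 0.5))  # small: very local response
print(radial_basis([1.0, 2.0], [2.0, 2.0], 2.0))  # larger sigma: smoother response
```
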
The GRNN is a kind of feed-forward neural network [7] that can be divided into four layers: the input layer, the pattern layer, the summation layer, and the output layer, as shown in Figure 2. The input and output vectors are

\[ x = [x_1, x_2, \ldots, x_n]^{T}, \qquad y = [y_1, y_2, \ldots, y_n]^{T}, \]

respectively.
Figure 2. Topology of GRNN network
The number of neurons in the input layer equals the dimension m of the input vectors of the training samples. The input layer passes the elements of the input vector directly to the pattern layer. The weight function is

\[ \lVert \mathrm{dist} \rVert_j = \sqrt{ \sum_{i=1}^{R} \left( P_i - w_{ji}^{1} \right)^2 }, \qquad j = 1, 2, \ldots, S \qquad (2) \]
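The distance computation of Eq. (2) can be sketched as follows; the function name and the example weight matrix are illustrative assumptions:

```python
import numpy as np

def pattern_distances(P, W):
    """Euclidean distances of Eq. (2) between the input P and each weight row.

    P : input vector of length R
    W : (S, R) weight matrix; row j stores training sample j
    Returns a length-S vector with ||P - w_j|| for j = 1..S.
    """
    P = np.asarray(P, dtype=float)
    W = np.asarray(W, dtype=float)
    return np.sqrt(np.sum((W - P) ** 2, axis=1))

W = np.array([[0.0, 0.0], [3.0, 4.0]])
print(pattern_distances([0.0, 0.0], W))  # [0. 5.]
```
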
The number of neurons in the pattern layer equals the number n of learning samples, with each neuron corresponding to a different sample. The transfer function of neuron i of the pattern layer is

\[ P_i = \exp\!\left( -\frac{ (x - x_i)^{T} (x - x_i) }{ 2\sigma^2 } \right), \qquad i = 1, 2, \ldots, n \qquad (3) \]

where \(x_i\) is the learning sample corresponding to neuron i of the pattern layer. The number of neurons in the output layer equals the dimension l of the output variables of the learning samples. The final output of the j-th neuron of the output layer is given by (4):

\[ y_j = \frac{S_j}{S_D}, \qquad j = 1, 2, \ldots, l \qquad (4) \]

where \(S_D\) is the sum of the pattern-layer outputs and \(S_j\) is the corresponding target-weighted sum formed in the summation layer.
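Combining Eqs. (2)-(4) gives the whole GRNN forward pass, sketched below in NumPy. The function name and the toy radius/frequency values are illustrative assumptions, not data from this paper:

```python
import numpy as np

def grnn_predict(x, X_train, Y_train, sigma):
    """GRNN forward pass combining Eqs. (2)-(4).

    Pattern layer: Gaussian of the distance from x to every training sample.
    Summation layer: S_D = sum of activations; S_j = activations weighted
    by the training targets. Output layer: y_j = S_j / S_D, Eq. (4).
    """
    X_train = np.asarray(X_train, dtype=float)
    Y_train = np.asarray(Y_train, dtype=float)
    d2 = np.sum((X_train - np.asarray(x, dtype=float)) ** 2, axis=1)  # Eq. (2) squared
    act = np.exp(-d2 / (2.0 * sigma ** 2))                            # Eq. (3)
    return act @ Y_train / np.sum(act)                                # Eq. (4)

# Toy radius -> resonance-frequency pairs (illustrative values only).
X = [[1.0], [1.5], [2.0]]
Y = [[5.0], [3.5], [2.5]]
print(grnn_predict([1.5], X, Y, sigma=0.1))  # close to 3.5
```

With a small smoothing factor the prediction follows the nearest training sample; a larger one averages over more of the sample set.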
The theoretical basis of the GRNN is nonlinear regression analysis. Let \(f(x, y)\) be the joint probability density function of the variables x and y. When the measured value of x is X, the regression of y with respect to X is

\[ Y = E[\,y \mid X\,] = \frac{ \displaystyle\int_{-\infty}^{+\infty} y \, f(X, y) \, dy }{ \displaystyle\int_{-\infty}^{+\infty} f(X, y) \, dy } \qquad (5) \]

where
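Equation (5) is the conditional expectation that the GRNN estimates from samples with Gaussian kernels (the Nadaraya-Watson form). A minimal one-dimensional sketch, with function name and sample values as illustrative assumptions, shows the two limiting behaviours of the smoothing factor:

```python
import numpy as np

def nw_estimate(X_query, x_samp, y_samp, sigma):
    """Sample estimate of E[y | X] from Eq. (5) using Gaussian kernels."""
    w = np.exp(-((np.asarray(x_samp, dtype=float) - X_query) ** 2)
               / (2.0 * sigma ** 2))
    return float(np.sum(w * np.asarray(y_samp, dtype=float)) / np.sum(w))

xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 2.0])
# Small sigma -> follows the nearest sample; large sigma -> overall mean.
print(nw_estimate(1.0, xs, ys, 0.05))   # close to 3.0
print(nw_estimate(1.0, xs, ys, 100.0))  # close to mean(ys) = 2.0
```
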