Cellular Neural Networks
2.1 Introduction
The cellular neural network (abbreviated as CNN) was proposed by Chua and Yang
in 1988 [1]-[2]. It is more general than the Hopfield neural network: the state of a
node (cell) at the next time is influenced only by the inputs and outputs of the nodes
near it. Cellular neural networks are inherently parallel, so the next states of all nodes
can be evaluated at the same time and operation is very fast. Because a CNN connects
its nodes only locally, it is well suited to realization in VLSI. The structure of a
cellular neural network, a regular, parallel array with local connections, is not only
suitable for integrated circuits but can also handle special computations characterized
by locally regular connectivity. Cellular neural networks were at first applied mostly
to image processing, for example noise removal and edge detection.
2.2 Cell
Let us consider an M × N two-dimensional cell array: M × N cells in total, arranged
in M rows and N columns. The cell in the ith row and jth column is labeled Cij. The
range of neighboring cells is called the neighborhood. Every cell has the same number
of neighboring cells, determined by the neighborhood radius r, which differs from an
ordinary circular radius. The neighborhood of a cell is defined as follows:
N_ij(r) = { C_kl | max(|k − i|, |l − j|) ≤ r, 1 ≤ k ≤ M; 1 ≤ l ≤ N }    (2-1)
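As an illustration, the neighborhood in Eq. (2-1) can be enumerated directly. The function below is a hypothetical helper (not from the text) that uses the same 1-based indices and returns the cells C_kl within radius r of cell Cij:

```python
def neighborhood(i, j, r, M, N):
    """Indices (k, l) of the cells C_kl in N_ij(r), Eq. (2-1), 1-based."""
    return [(k, l)
            for k in range(max(1, i - r), min(M, i + r) + 1)
            for l in range(max(1, j - r), min(N, j + r) + 1)]
```

An interior cell with r = 1 has (2r + 1)² = 9 neighbors (the cell itself included), while a corner cell of the array has only 4, because the max(...) ≤ r condition is clipped at the array boundary.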
Fig. 2.2 Radius r = 1: cell Cij and its 3×3 sphere of influence (neighboring cells).
The relevant parameters of a cell are defined as follows and shown in Fig. 2.3 [3]:
u_ij: Input of cell Cij;
x_ij: State of cell Cij;
y_ij: Output of cell Cij;
I_ij: Threshold value of cell Cij.
Fig. 2.3 The primary element of a cellular neural network, cell Cij, with input u_ij, state x_ij, output y_ij, and threshold I_ij.
Fig. 2.4 The basic circuit of a cell Cij, with capacitor C, resistors Rx and Ry, source Eij, bias current Iij, and controlled current sources Ixu(i, j; k, l) and Ixy(i, j; k, l).
The basic circuit structure of a cell is shown in Fig. 2.4 [1]. According to this
circuit structure, the motion equation of a cell Cij can be written as
C dx_ij(t)/dt = −(1/R_x) x_ij(t) + Σ_{C_kl ∈ N_ij} A(i, j; k, l) y_kl(t) + Σ_{C_kl ∈ N_ij} B(i, j; k, l) u_kl + I_ij    (2-2)
1 ≤ i ≤ M; 1 ≤ j ≤ N
For simplicity and without loss of generality, we can suppose C = R_x = 1; Equation
(2-2) can then be rewritten as Equation (2-3):
dx_ij(t)/dt = −x_ij(t) + Σ_{C_kl ∈ N_ij} A(i, j; k, l) y_kl(t) + Σ_{C_kl ∈ N_ij} B(i, j; k, l) u_kl + I_ij    (2-3)
1 ≤ i ≤ M; 1 ≤ j ≤ N
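To make the dynamics concrete, Eq. (2-3) can be integrated numerically. The sketch below is an illustration only: it performs one forward-Euler step with space-invariant (2r+1)×(2r+1) templates and the piecewise-linear output of Eq. (2-4), and it assumes zero values outside the array boundary, a boundary condition the text does not specify:

```python
import numpy as np

def cnn_step(x, u, A, B, I, dt=0.01):
    """One forward-Euler step of Eq. (2-3) with C = Rx = 1.

    x, u: M x N state and input arrays; A, B: (2r+1) x (2r+1) templates;
    cells outside the array are treated as zero (an assumption)."""
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))   # outputs y_kl, Eq. (2-4)
    r = A.shape[0] // 2
    yp = np.pad(y, r)                           # zero-padded outputs
    up = np.pad(u, r)                           # zero-padded inputs
    M, N = x.shape
    dx = -x + I                                 # -x_ij + I_ij
    for di in range(-r, r + 1):                 # accumulate the template sums
        for dj in range(-r, r + 1):
            dx += A[di + r, dj + r] * yp[r + di:r + di + M, r + dj:r + dj + N]
            dx += B[di + r, dj + r] * up[r + di:r + di + M, r + dj:r + dj + N]
    return x + dt * dx
```

Repeated calls to cnn_step advance the network state; the step size dt here is an arbitrary choice for the sketch.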
From the neural-network point of view, the meaning of each parameter in Equation
(2-3) is as follows:
x_ij: State of cell Cij;
y_kl: Output of neighboring cell C_kl of cell Cij;
u_kl: Input of neighboring cell C_kl of cell Cij;
I_ij: Threshold value of cell Cij;
A(i, j; k, l): The weight applied to the output of neighboring cell C_kl of cell Cij;
B(i, j; k, l): The weight applied to the input of neighboring cell C_kl of cell Cij.
A(i, j; k, l), B(i, j; k, l) and I_ij comprise 2 × (2r + 1)² + 1 values in total. They
determine the behavior of a two-dimensional cellular neural network. A = [A(i, j; k, l)]
is called the feedback template and B = [B(i, j; k, l)] the control template. The output
y_ij is a piecewise-linear function of x_ij, given by the following output equation and
shown in Fig. 2.5 [1]:
y_ij(t) = f(x_ij(t)) = (1/2) ( |x_ij(t) + 1| − |x_ij(t) − 1| )    (2-4)

Fig. 2.5 The piecewise-linear output function y_ij = f(x_ij).
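A direct transcription of Eq. (2-4), as a small illustrative check of its three regimes (linear for |x| < 1, saturated at ±1 outside):

```python
import numpy as np

def f(x):
    """Piecewise-linear CNN output function, Eq. (2-4)."""
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

print(f(0.5), f(3.0), f(-2.0))   # 0.5 1.0 -1.0
```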
This output function satisfies

dy_ij / dx_ij = 1 for |x_ij| < 1, and 0 for |x_ij| ≥ 1,

and if |x_ij| < 1, then y_ij = x_ij.
Each cell is influenced by the inputs and outputs of its neighboring cells: the inputs
are multiplied by template B and the outputs by template A, as Fig. 2.6 shows [3].
Fig. 2.6 Standard CNN (A, B, I): inputs u_kl from the neighborhood are multiplied by the control template B, feedback outputs y_kl from the neighborhood are multiplied by template A, and the bias I_ij is added.
If A(i, j; i, j) > 1, then after the transient has decayed to zero, each cell of a cellular
neural network settles at a stable equilibrium point, and the magnitude of each
stable equilibrium point is greater than 1. In other words, we have the following
characteristics:
lim_{t→∞} |x_ij(t)| > 1,  1 ≤ i ≤ M; 1 ≤ j ≤ N    (2-5)

and

lim_{t→∞} y_ij(t) = ±1,  1 ≤ i ≤ M; 1 ≤ j ≤ N    (2-6)
Namely, if the center element A(i, j; i, j) of template A satisfies A(i, j; i, j) > 1, then
the absolute value of each cell's output at a stable equilibrium point equals 1. This
guarantees that cellular neural networks have binary-valued outputs, a property
essential for solving classification problems in image processing applications.
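This convergence can be checked numerically on a single isolated cell. The toy example below uses assumed values a = A(i, j; i, j) = 2 and I = 0.5 (not taken from the text); the state settles at x* = a·1 + I = 2.5 > 1, so the output saturates at +1:

```python
# Single isolated cell: dx/dt = -x + a*f(x) + I, with self-feedback a > 1
f = lambda v: 0.5 * (abs(v + 1) - abs(v - 1))   # output function, Eq. (2-4)
a, I, x, dt = 2.0, 0.5, 0.0, 0.01               # hypothetical parameters
for _ in range(5000):                           # forward Euler up to t = 50
    x += dt * (-x + a * f(x) + I)
print(x, f(x))                                  # x tends to 2.5, output to 1.0
```

Once x exceeds 1, f(x) = 1 and the dynamics reduce to dx/dt = (a + I) − x, whose equilibrium a + I = 2.5 indeed has magnitude greater than 1, consistent with (2-5) and (2-6).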
It can be said that cellular neural networks evolved from the Hopfield neural
network, and they are more general. In a cellular neural network each cell is
influenced only by nearby cells, unlike the Hopfield network, in which each cell is
influenced by all other cells; cellular neural networks can therefore be realized easily
with VLSI techniques. Some differences between cellular neural networks and the
Hopfield neural network are the following:
(1) The weight matrix of a Hopfield neural network must be symmetric, but the weight
matrices A and B of a cellular neural network need not be symmetric.
(2) A Hopfield neural network is allowed to operate asynchronously, but cellular neural
networks must operate synchronously.
(3) The connections in a cellular neural network are local, whereas the Hopfield
neural network is fully connected. In general, a cellular neural network has fewer
interconnections than a Hopfield neural network.
(4) The self-feedback coefficient A(i, j; i, j) of cell Cij in a cellular neural network is
greater than 1 in order to guarantee that the steady-state output of each cell is
either +1 or -1. This condition is always violated in a Hopfield neural network,
since its diagonal coupling coefficients are all assumed to be zero [6].
Designing a CNN system means finding one or more templates that realize a
certain input-output behavior. The template design methods found in the literature
can be divided into two classes:
Design by synthesis: given an explicit problem specification, a set of parameters is
found that satisfies the specified requirements.
Design by learning: given a vague description of the task through a large number of
input-target pairs, a learning process is applied that minimizes some kind of cost
function.
Rather than being alternatives, these two types of design can be considered
complementary. The choice between them depends on the type of information
available about the problem at hand. When an explicit description of the desired
functional behavior is available, a synthesis method is used; when only implicit
knowledge is available, a learning method is the obvious choice.
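As a toy illustration of design by learning (a hypothetical example, not from the text), a cost function over input-target pairs can be minimized by a simple parameter search. Here the self-weight a and bias I of a single hard-limited cell are chosen by exhaustive grid search so that its response reproduces two training pairs:

```python
import numpy as np

pairs = [(-1.0, -1.0), (1.0, 1.0)]           # (input u, target output) pairs

def cost(a, I):
    """Squared output error of a single hard-limited cell, y = sgn(a*u + I)."""
    err = 0.0
    for u, t in pairs:
        y = 1.0 if a * u + I >= 0 else -1.0
        err += (y - t) ** 2
    return err

# Exhaustive grid search: the crudest possible "learning process"
best_cost, best_a, best_I = min(
    (cost(a, I), a, I)
    for a in np.linspace(-2, 2, 21)
    for I in np.linspace(-1, 1, 11))
```

Realistic template learning replaces the grid with gradient-based or stochastic optimization over the full A, B and I parameters, but the structure, a cost over input-target pairs that the search minimizes, is the same.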
Harrer and Nossek proposed the discrete-time cellular neural network (DT-CNN) [8]
as the discrete-time version of the CNN introduced by Chua and Yang (abbreviated
here as Chua-Yang CNN). The network dynamics of the Chua-Yang CNN are
described by a set of differential equations:
dx_ij(t)/dt = −x_ij(t) + Σ_{C_kl ∈ N_ij} A(i, j; k, l) y_kl(t) + Σ_{C_kl ∈ N_ij} B(i, j; k, l) u_kl(t) + I_ij    (2-7)
1 ≤ i ≤ M; 1 ≤ j ≤ N
If t in the previous equation is taken to be a discrete time variable h, the equation
can be rewritten as:
x_ij(h + 1) = Σ_{C_kl ∈ N_ij} A(i, j; k, l) y_kl(h) + Σ_{C_kl ∈ N_ij} B(i, j; k, l) u_kl(h) + I_ij    (2-8)
1 ≤ i ≤ M; 1 ≤ j ≤ N
The DT-CNN is a discrete-time system whose dynamic behavior is described by a set
of discrete motion equations. At discrete time h, the state x_ij(h) of a cell Cij depends
on the time-invariant input u_kl and the time-varying output y_kl(h − 1) of its
neighboring cells C_kl. The output function of the DT-CNN is the hard-limiter
activation function:
y_ij(h + 1) = f(x_ij(h + 1)) = +1 for x_ij(h + 1) ≥ 0;  −1 for x_ij(h + 1) < 0    (2-9)
1 ≤ i ≤ M; 1 ≤ j ≤ N
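Equations (2-8) and (2-9) translate into one synchronous update per time step. The sketch below is an illustration that assumes zero values outside the array boundary (unspecified in the text) and space-invariant templates; it computes the new binary outputs of all cells at once:

```python
import numpy as np

def dtcnn_step(y, u, A, B, I):
    """One synchronous DT-CNN update: state via Eq. (2-8), output via Eq. (2-9)."""
    r = A.shape[0] // 2
    yp = np.pad(y, r)                # zero padding outside the array (assumption)
    up = np.pad(u, r)
    M, N = y.shape
    x = np.full((M, N), I, dtype=float)
    for di in range(-r, r + 1):      # template sums over the neighborhood
        for dj in range(-r, r + 1):
            x += A[di + r, dj + r] * yp[r + di:r + di + M, r + dj:r + dj + N]
            x += B[di + r, dj + r] * up[r + di:r + di + M, r + dj:r + dj + N]
    return np.where(x >= 0, 1.0, -1.0)   # hard limiter, Eq. (2-9)
```

With A having only a center entry of 1 and B = 0, the update leaves any ±1 pattern unchanged, since each cell then just re-limits its own previous output.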
References
[1] L. O. Chua and L. Yang, "Cellular neural networks: Theory," IEEE Trans.
Circuits Syst., vol. 35, no. 10, pp. 1257-1272, 1988.
[2] L. O. Chua and L. Yang, "Cellular neural networks: Applications," IEEE Trans.
Circuits Syst., vol. 35, no. 10, pp. 1273-1290, 1988.
[3] L. O. Chua, CNN: A Paradigm for Complexity. World Scientific, 1998.
[4] Special Issue on Chaotic Systems, Proc. IEEE, Aug. 1987.
[5] L. O. Chua and R. N. Madan, "The sights and sounds of chaos," IEEE Circuits
Devices Mag., pp. 3-13, Jan. 1988.
[6] J. J. Hopfield, "Neural networks and physical systems with emergent
computational abilities," Proc. Natl. Acad. Sci. USA, vol. 79, pp. 2554-2558, 1982.
[7] J. J. Hopfield, "Neurons with graded response have collective computational
properties like those of two-state neurons," Proc. Natl. Acad. Sci. USA, vol. 81,
pp. 3088-3092, 1984.
[8] H. Harrer and J. A. Nossek, "Discrete-time cellular neural networks,"
International Journal of Circuit Theory and Applications, vol. 20, pp. 453-468, 1992.