ANN
By
P Devi Priya
Overview
• Introduction to Artificial Neural Network
• A Brief History of Artificial Neural Network
• Biological Neuron
• Schematic representation of Biological Neuron
• Artificial Neural Network vs Biological Neurons
• Architecture of an Artificial Neural Network
• Neuron Functionality and Activation
• Types of Artificial Neural Network
• Building blocks of ANN
• Appropriate problems for Neural Network Learning
• Applications of ANN
• Advantages of ANN
• Disadvantages of ANN
Introduction to Artificial Neural Networks
Artificial Neural Networks (ANNs) are
computational models inspired by the
structure of the human brain, with
interconnected neurons processing
information to solve tasks such as
classification, prediction, and clustering.
• They are modelled after biological neurons,
the small units that receive stimuli and
produce responses.
• ANNs are widely used for complex pattern
recognition and decision-making.
A Brief History of Artificial Neural Network
ANN during 1940s to 1960s
• 1943 − Warren McCulloch and Walter Pitts created a basic neural
network model using electrical circuits to describe how brain
neurons function. This work initiated neural network theory.
• 1949 − Donald Hebb’s book, The Organization of Behavior, proposed
that repeated activation of one neuron by another strengthens the
connection between them.
• 1956 − An associative memory network was introduced by Taylor.
• 1958 − A learning method for the McCulloch-Pitts neuron model,
named the Perceptron, was invented by Rosenblatt.
• 1960 − Bernard Widrow and Marcian Hoff developed models called
“ADALINE” and “MADALINE.”
ANN during 1960s to 1980s
• 1961 − Rosenblatt proposed a “backpropagation” scheme for
multilayer networks, though his attempt was unsuccessful.
• 1964 − Taylor constructed a winner-take-all circuit with inhibitions among
output units.
• 1969 − Minsky and Papert published Perceptrons, demonstrating the
limitations of single-layer perceptrons.
• 1971 − Kohonen developed Associative memories.
• 1976 − Stephen Grossberg and Gail Carpenter developed Adaptive
resonance theory.
ANN from 1980s to present
• 1982 − The major development was Hopfield’s Energy approach.
• 1985 − The Boltzmann machine was developed by Ackley, Hinton, and
Sejnowski.
• 1986 − Rumelhart, Hinton, and Williams introduced the Generalised
Delta Rule.
• 1988 − Kosko developed Binary Associative Memory (BAM) and also gave
the concept of Fuzzy Logic in ANN.
Biological Neuron
• A nerve cell (neuron) is a special biological cell that processes information.
By one estimate, the human brain contains approximately 10^11 neurons
with approximately 10^15 interconnections.
Working of a Biological Neuron
A typical neuron consists of the following four parts:
• Dendrites − Tree-like branches responsible for receiving information
from the other neurons a neuron is connected to. In a sense, they act
as the ears of the neuron.
• Soma − The cell body of the neuron, responsible for processing the
information received from the dendrites.
• Axon − A cable-like structure through which the neuron sends information.
• Synapses − The connections between the axon of one neuron and the
dendrites of other neurons.
Schematic representation of Biological Neuron
ANN vs Biological Neurons
• The concept of ANNs comes from biological neurons, and their structure
is also inspired by biological neurons.
• A biological neuron has a cell body or soma to process the impulses,
dendrites to receive them, and an axon that transfers them to other
neurons.
• The input nodes of artificial neural networks receive input signals, the
hidden layer nodes compute these input signals, and the output layer nodes
compute the final output by processing the hidden layer’s results using
activation functions.
Continuation of ANN vs Biological Neurons
Biological Neuron      | Artificial Neuron
Dendrites              | Inputs
Cell Nucleus or Soma   | Nodes
Synapses               | Weights
Axon                   | Output
Synaptic plasticity    | Backpropagation
The architecture of an ANN
An ANN comprises an input layer,
hidden layers, and an output layer. Each
layer contains neurons that apply
weights and biases to inputs, simulating
learning and decision-making.
-Components:
• Input Layer: Receives the input
signals/data.
• Hidden Layers: Responsible for
processing the data.
• Output Layer: Produces the final results.
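The layered structure above can be sketched as a minimal forward pass in plain Python. The layer sizes, weight values, and the choice of a sigmoid activation here are illustrative assumptions, not taken from the slides:

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(inputs, weights, biases):
    # One layer: each neuron computes a weighted sum of all inputs
    # plus its bias, then applies the activation function.
    return [sigmoid(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

# Illustrative network: 2 inputs -> 3 hidden neurons -> 1 output
hidden_w = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]   # 3 neurons x 2 inputs
hidden_b = [0.1, 0.0, -0.1]
output_w = [[0.7, -0.5, 0.2]]                        # 1 neuron x 3 hidden inputs
output_b = [0.05]

x = [1.0, 0.5]                              # input layer receives the signals
h = layer_forward(x, hidden_w, hidden_b)    # hidden layer processes them
y = layer_forward(h, output_w, output_b)    # output layer produces the result
```

Stacking more calls to `layer_forward` would add further hidden layers in the same way.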
Neuron Functionality and Activation
Each neuron takes inputs (x1, x2, …, xn),
applies weights (w1, w2, …, wn), adds a
bias (b), and passes the result through
an activation function (f) to determine
the output it sends:
y = f(w1·x1 + w2·x2 + … + wn·xn + b)
Core components:
• Weights: Strength of connections.
• Bias: Adjusts outputs for flexibility.
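The computation above can be sketched for a single neuron. The weight values, bias, and the threshold (step) activation are illustrative choices, not from the slides:

```python
def step(z):
    # Threshold activation: fire (1) only if the weighted evidence exceeds 0
    return 1 if z > 0 else 0

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias, passed through f
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return step(z)

# Two inputs; the weights make the first input count more than the second
out = neuron([1.0, 0.0], weights=[0.6, 0.4], bias=-0.5)  # z = 0.1, so it fires
```

Here `out` is 1 because the weighted sum 0.6·1.0 + 0.4·0.0 − 0.5 = 0.1 exceeds the threshold; with both inputs at 0, the bias keeps the neuron silent.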
Types of Artificial Neural Networks
Various ANN types handle different tasks, each with a unique structure.
-Network Types:
• Feedforward Neural Networks (FNN): One of the most basic ANNs.
In this network, the data or input travels in a single direction: it
enters the ANN through the input layer and exits through the
output layer.
• Convolutional Neural Networks (CNN): Specialized for image data.
In this network, the connections between units have weights that
determine the influence of one unit on another.
• Recurrent Neural Networks(RNN):The RNN saves the output of a
layer and feeds this output back to the input to better predict the
outcome of the layer.
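The recurrent feedback described for RNNs can be sketched as a single hidden value carried across time steps. The weight values and the tanh activation are illustrative assumptions:

```python
import math

def rnn_step(x, h_prev, w_x=0.5, w_h=0.8, b=0.0):
    # The new hidden state mixes the current input with the previous
    # output, which is what lets the network remember earlier steps.
    return math.tanh(w_x * x + w_h * h_prev + b)

h = 0.0                          # initial hidden state
for x in [1.0, 0.0, 1.0]:        # a short input sequence
    h = rnn_step(x, h)           # the layer's output is fed back in
```

After the loop, `h` depends on the whole sequence, not just the last input, which is the property the slide describes.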
Continuation of Types of ANN
• Modular Neural Network: A Modular Neural Network contains a collection
of different neural networks that work independently towards obtaining the
output with no interaction between them. Each of the different neural
networks performs a different sub-task by obtaining unique inputs
compared to other networks.
• Radial Basis Function Neural Network: Radial basis functions are
functions whose value depends on the distance of a point from a center.
RBF networks have two layers: in the first, the input is mapped onto all
the radial basis functions in the hidden layer; the output layer then
computes the output in the next step. Radial basis function nets are
normally used to model data that represents an underlying trend or
function.
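The distance-to-center idea behind an RBF unit can be sketched with a Gaussian basis function. The centers, width, and output weights below are illustrative assumptions:

```python
import math

def rbf(point, center, width=1.0):
    # Gaussian radial basis: the response depends only on the distance
    # between the input point and this unit's center.
    dist_sq = sum((p - c) ** 2 for p, c in zip(point, center))
    return math.exp(-dist_sq / (2 * width ** 2))

# First layer: map the input onto two RBF units with different centers
centers = [[0.0, 0.0], [1.0, 1.0]]
x = [0.0, 0.0]
hidden = [rbf(x, c) for c in centers]

# Second layer: a weighted sum of the hidden responses gives the output
output = 0.6 * hidden[0] + 0.4 * hidden[1]
```

An input exactly at a center produces the maximum response 1.0 from that unit, and the response decays smoothly with distance.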
Building Blocks of ANN
Processing of an ANN depends upon the following three building blocks −
1.Network Topology
A network topology is the arrangement of a network along with its
nodes and connecting lines. According to the topology, ANN can be
classified as the following kinds −
❑Feedforward Network
• Single layer feedforward network
• Multilayer feedforward network
❑Feedback Network
• Recurrent networks
• Fully Recurrent network
• Jordan network
2.Adjustments of Weights or Learning
➢Supervised Learning
➢Unsupervised Learning
➢Reinforcement Learning
3.Activation Functions
• An activation function may be defined as a transformation applied
over the net input of a neuron to obtain its final output.
• In an ANN, we apply activation functions over the input to
get the exact output.
The following are some activation functions of interest −
➢Linear Activation Function
It is also called the identity function, as it leaves the input
unchanged. It can be defined as −
F(x) = x
➢Sigmoid Activation Function
1.Binary sigmoidal function
F(x) = sigm(x) = 1/(1 + exp(−x))
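The two activation functions above can be sketched directly; a minimal illustration:

```python
import math

def linear(x):
    # Identity function: the output equals the input, unchanged
    return x

def binary_sigmoid(x):
    # Squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(linear(2.5))          # 2.5
print(binary_sigmoid(0.0))  # 0.5, the midpoint of the curve
```

The sigmoid saturates toward 0 for large negative inputs and toward 1 for large positive inputs, which is why it is used when the output should behave like a probability or a soft on/off signal.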
Disadvantages of ANN
• Hardware dependence.
• Unexplained behaviour of the network.
• Determination of the proper network structure is difficult.
• Difficulty of showing the problem to the network.
• The duration of training the network is unknown.
Thank you