ML Unit 3 Notes


[ARTIFICIAL NEURAL NETWORK] Unit3

Introduction to Artificial Neural Network:


An Artificial Neural Network (ANN) is a mathematical model that tries to simulate the structure and
functionality of biological neural networks. The basic building block of every artificial neural network is
the artificial neuron, which is a simple mathematical model. Such a model has three simple sets of rules:
multiplication, summation, and activation. At the entrance of the artificial neuron the inputs are weighted,
which means that every input value is multiplied by an individual weight. In the middle section of the
artificial neuron, a sum function adds all the weighted inputs and a bias. At the exit of the artificial
neuron, this sum of weighted inputs and bias passes through an activation function.
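These three rules (multiplication, summation, activation) can be sketched in a few lines of Python. The sigmoid activation and the sample values below are illustrative choices, not part of these notes:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: multiply, sum, activate."""
    # Multiplication and summation: each input times its own weight, plus the bias
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation: a sigmoid squashes the sum into the range (0, 1)
    return 1 / (1 + math.exp(-total))
```

For example, `artificial_neuron([1.0, 0.5], [0.4, -0.2], 0.1)` computes the weighted sum 0.4 and returns roughly 0.6.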

Fig. 1.1 Diagram of Biological Neural Network.

Fig. 1.2 Diagram of Artificial Neural Network.


Relationship between Biological neural network and artificial neural network:

Biological Neural Network Artificial Neural Network


Dendrites Inputs
Cell nucleus Nodes
Synapse Weights
Axon Output

How a Neural Network Works:


Let’s understand with an example of how a neural network works:
Consider a neural network for email classification. The input layer takes features like email
content, sender information, and subject. These inputs, multiplied by adjusted weights, pass
through hidden layers. The network, through training, learns to recognize patterns indicating
whether an email is spam or not. The output layer, with a binary activation function, predicts
whether the email is spam (1) or not (0). As the network iteratively refines its weights through
backpropagation, it becomes adept at distinguishing spam from legitimate emails, showcasing
the practicality of neural networks in real-world applications such as email filtering.

DCST(3rd year, 6th sem) Page 1
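A drastically simplified, single-neuron version of this spam predictor can be sketched with hand-set weights. The feature names and weight values below are hypothetical placeholders for what a trained network would actually learn:

```python
import math

# Hypothetical hand-set weights; a real network would learn these from
# labelled emails during training.
WEIGHTS = {"spam_words": 1.5, "unknown_sender": 0.8, "has_links": 0.6}
BIAS = -2.0

def classify_email(features):
    """Return 1 (spam) or 0 (not spam) for a dict of numeric features."""
    # Weighted sum of the features plus the bias
    total = sum(WEIGHTS[name] * value for name, value in features.items()) + BIAS
    probability = 1 / (1 + math.exp(-total))  # sigmoid activation
    return 1 if probability >= 0.5 else 0     # binary output
```

An email with many spam words from an unknown sender classifies as 1; a clean email from a known sender classifies as 0.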

Architecture of Artificial Neural Network:


An Artificial Neural Network primarily consists of three layers:

Input Layer:
First is the input layer. This layer will accept the data and pass it to the rest of
the network.

Hidden Layer:

The second type of layer is called the hidden layer. The hidden layer lies between
the input and output layers. It performs all the calculations needed to find hidden features
and patterns. A neural network may have one or more hidden layers.

Output Layer:

The last type of layer is the output layer. The input goes through a series of
transformations using the hidden layer, which finally results in output that is conveyed
using this layer. The output layer holds the result or the output of the problem.

The artificial neural network takes input and computes the weighted sum of the inputs and
includes a bias. This computation is represented in the form of a linear function.

The weighted total is then passed as input to an activation function to produce the output.
Activation functions decide whether a node should fire or not; only the nodes that fire pass
their signal on to the output layer. There are distinctive activation functions available that
can be applied depending on the sort of task being performed.
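This flow can be sketched as a layer-by-layer forward pass; the layer sizes and the weight and bias values below are arbitrary illustrations:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """One layer: weighted sum of inputs plus bias for each node, then activation."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Tiny illustrative network: 2 inputs -> 2 hidden nodes -> 1 output node
hidden = layer_forward([0.5, 0.8], [[0.1, 0.4], [-0.3, 0.2]], [0.0, 0.1])
output = layer_forward(hidden, [[0.6, -0.5]], [0.2])
```

Each call computes the linear function (weighted sum plus bias) for one layer and passes it through the activation.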

Activation Function: An activation function is a mathematical equation that determines the output of
each element in the neural network. It takes the input to each neuron and transforms it into
an output, usually in the range 0 to 1 or -1 to 1. It can be thought of as an extra
transformation applied over the input to obtain the desired output.


Activation function decides whether a neuron should be activated or not by calculating the
weighted sum and further adding bias to it. The purpose of the activation function is to
introduce non-linearity into the output of a neuron.
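Three widely used activation functions can be written directly; which one suits a task depends on the output range it needs (sigmoid for (0, 1), tanh for (-1, 1), ReLU for unbounded positive outputs):

```python
import math

def sigmoid(x):
    """Squashes any real input into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def tanh(x):
    """Squashes any real input into the range (-1, 1)."""
    return math.tanh(x)

def relu(x):
    """Passes positive inputs through unchanged and zeroes out negative ones."""
    return max(0.0, x)
```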

Network topology of Artificial Neural Network:

1. Single Layer Network: A single-layer neural network contains only an input layer and an
output layer, with no hidden layer in between. The input layer receives the input signals
and the output layer generates the output signals accordingly.

Fig. 1.3 Diagram of Single Layer Network.

2. Multi Layer Network: A multi-layer ANN has one or more hidden layers. Because these
extra layers sit between the input layer and the output layer, they are called hidden
layers.


Fig. 1.4 Diagram of Multi Layer Network.

Model of Artificial Neuron Network:

Perceptron: A perceptron is the basic, fundamental model of learning in a neural network. It consists of
some input values, weights and a bias, a weighted sum, and an activation function. Multiple perceptrons
are combined to build a complex neural network.

Each connection in a neural network has an associated weight, which changes in the course of
learning. In this example of supervised learning, the network starts its learning by
assigning a random value to each weight. It then calculates the output value for a set of
records whose expected output values are known; because this set defines the task, it is
called the learning sample. The network compares each calculated output value with the
expected value and then calculates an error function (E).
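The learning loop just described can be sketched as the classic perceptron rule. For reproducibility this sketch starts from zero weights rather than random ones, and the AND gate serves as an illustrative learning sample:

```python
def predict(inputs, weights, bias):
    """Weighted sum plus bias, passed through a step activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total >= 0 else 0

def train_perceptron(samples, epochs=10, lr=1):
    """Compare each output with its expected value and nudge the weights by the error."""
    weights, bias = [0, 0], 0
    for _ in range(epochs):
        for inputs, expected in samples:
            error = expected - predict(inputs, weights, bias)  # error E on this sample
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learning sample: the AND gate, with known expected outputs
AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(AND)
```

After training, `predict` reproduces the AND truth table.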

Fig. 1.5 Diagram of Perceptron Model.

Back Propagation Algorithm:


In machine learning, backpropagation is a widely used algorithm for training artificial neural
networks (ANNs). The backpropagation algorithm learns the weights of a layered ANN. It uses a gradient
descent rule to minimize the squared error between the target values and the actual output values of the network.


The backpropagation algorithm searches a large hypothesis space consisting of all possible weight
values for the network. The total loss is propagated backwards into the network to determine the loss
attributable to each node, and the weight of each node is then updated to minimize that loss. In other
words, an ANN uses backpropagation to compute the gradient of the error with respect to the weights
for gradient descent.

Steps of back propagation algorithm:

Step 1: Inputs arrive through the connected paths.
Step 2: The input is modelled using the current weights W; the weights are usually chosen randomly.
Step 3: Calculate the output of each neuron, from the input layer through the hidden layer to the output layer.
Step 4: Calculate the error in the outputs.
Step 5: From the output layer, go back through the hidden layer, adjusting the weights to reduce the error.
Step 6: Repeat the process until the desired output is achieved.
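Steps 1-6 can be sketched for the smallest possible case: a 1-1-1 network (one input, one sigmoid hidden neuron, one sigmoid output neuron) trained by gradient descent on the squared error. The starting weights, input, target, and learning rate are fixed illustrative values; the notes say weights are usually chosen randomly:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Fixed illustrative starting weights for a 1-1-1 network
w_hidden, b_hidden = 0.5, 0.0
w_out, b_out = 0.5, 0.0
x, target, lr = 1.0, 0.0, 0.5

for _ in range(200):
    # Steps 1-3: forward pass from input through hidden layer to output
    h = sigmoid(w_hidden * x + b_hidden)
    y = sigmoid(w_out * h + b_out)
    # Step 4: squared error between target and actual output
    error = 0.5 * (target - y) ** 2
    # Step 5: propagate the error backwards and adjust the weights
    delta_out = (y - target) * y * (1 - y)          # gradient at the output neuron
    delta_hidden = delta_out * w_out * h * (1 - h)  # chain rule into the hidden neuron
    w_out -= lr * delta_out * h
    b_out -= lr * delta_out
    w_hidden -= lr * delta_hidden * x
    b_hidden -= lr * delta_hidden
```

Each pass through the loop is one repetition of Step 6; after 200 repetitions the output y has moved close to the target 0.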

Application of ANN:
a) Stock price prediction.
b) Fingerprint recognition.
c) Loan application approval prediction.
d) Autonomous vehicle driving using ANN.

Advantages of ANN:
a) Pattern Recognition : Their proficiency in pattern recognition makes them effective in
tasks such as audio and image identification, natural language processing, and other
intricate data patterns.
b) Parallel processing capability : Because computation is distributed across many nodes,
artificial neural networks can perform more than one task simultaneously.
c) Non-linearity : Neural networks are able to model and comprehend complicated
relationships in data by virtue of the non-linear activation functions found in neurons,
which overcome the drawbacks of linear models.
d) Work with incomplete knowledge : After training, an ANN may produce output
even with incomplete data. The loss of performance depends on the significance of the
missing data.


Disadvantages of ANN:
a) Requirement for large dataset : For efficient training, artificial neural networks need large
datasets; otherwise, their performance may suffer from incomplete data.
b) Computational Power : Large neural network training can be a laborious and
computationally demanding process that demands a lot of computing power.
c) Need of proper network structure : There is no specific rule for determining the
structure of an artificial neural network; an appropriate network structure is found
through experience and trial and error.
d) Process duration is unknown : Training reduces the network's error to some specific value,
but this value does not guarantee optimum results, and there is no way to know in advance
how long training will take.

Face Recognition:
A general face recognition system includes four steps: face detection, preprocessing, feature extraction,
and face recognition.

Image → Face detection → Pre-processing → Feature extraction → Face recognition → Identification

a) Face detection : The main function of this step is to detect a face in the captured image or in an
image selected from the database. The face detection process verifies whether the given image
contains a face; after a face is detected, the output is passed on to the pre-processing step.
b) Pre-processing : This step prepares the detected face for recognition. Unwanted noise, blur,
varying lighting conditions, and shadowing effects are removed using pre-processing techniques.
Once a clean, smooth face image is obtained, it is passed to the feature extraction process.
c) Feature extraction : In this step the features of the face are extracted using a feature extraction
algorithm. Extraction is performed for information packing, dimensionality reduction, and
noise cleaning. After this step, a face patch is usually transformed into a vector of fixed
dimension.
d) Face recognition : Once feature extraction is done, this last step analyzes the representation of
each face and recognizes its identity, achieving automatic face recognition. A face database is
required: for each person, several images are taken and their features are extracted and stored
in the database. When an input face image arrives for recognition, the system first performs
face detection, pre-processing, and feature extraction, and then compares the resulting features
with each face class stored in the database.
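The final comparison step can be sketched as a nearest-neighbour lookup over stored feature vectors. Everything here is a hypothetical placeholder: the database entries, the 3-element vectors, and the distance threshold stand in for features a trained extractor would produce:

```python
import math

# Hypothetical feature database: person name -> stored feature vector
DATABASE = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognise(features, threshold=0.5):
    """Compare an extracted feature vector with each face class in the database."""
    name, dist = min(((n, euclidean(features, v)) for n, v in DATABASE.items()),
                     key=lambda item: item[1])
    # Reject matches that are too far from every stored face
    return name if dist <= threshold else "unknown"
```

A vector close to a stored one returns that person's name; a vector far from all entries returns "unknown".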


Previous year questions:


1. What is back propagation?
In machine learning, backpropagation is a widely used algorithm for training artificial
neural networks (ANNs). The backpropagation algorithm learns the weights of a layered ANN. It
uses a gradient descent rule to minimize the squared error between the target values and the
actual output values of the network.

The backpropagation algorithm searches a large hypothesis space consisting of all possible
weight values for the network. The total loss is propagated backwards into the network to
determine the loss attributable to each node, and the weight of each node is then updated to
minimize that loss. In other words, an ANN uses backpropagation to compute the gradient of the
error with respect to the weights for gradient descent.

2. Write the applications of NN (Neural Network).


▪ Stock price prediction.
▪ Fingerprint recognition.
▪ Loan application approval prediction.
▪ Autonomous vehicle driving using ANN.
▪ Marketing and sales: When you log on to e-commerce sites like Amazon and Flipkart,
they recommend products to buy based on your previous browsing history.
Similarly, if you love pasta, then Zomato, Swiggy, etc. will show you restaurant
recommendations based on your tastes and previous order history.
▪ Facial recognition : Facial recognition systems are now a staple of modern
technology. The system is programmed to recognize and match human faces
against related digital pictures. This technology is commonly applied in offices
for entry control, where the system verifies a human face and matches it with the
listed IDs available in the database.
3. Write the algorithm of Backpropagation.
Step 1: Inputs arrive through the connected paths.
Step 2: The input is modelled using the current weights W; the weights are usually chosen randomly.
Step 3: Calculate the output of each neuron, from the input layer through the hidden layer to the output layer.
Step 4: Calculate the error in the outputs.
Step 5: From the output layer, go back through the hidden layer, adjusting the weights to reduce the error.
Step 6: Repeat the process until the desired output is achieved.

4. What is an Activation Function?


An activation function is a mathematical equation that determines the output of each element
in the neural network. It takes the input to each neuron and transforms it into an output,
usually in the range 0 to 1 or -1 to 1. It can be thought of as an extra transformation
applied over the input to obtain the desired output.
Activation function decides whether a neuron should be activated or not by calculating the
weighted sum and further adding bias to it. The purpose of the activation function is to
introduce non-linearity into the output of a neuron.

Diagram of Activation Function.

5. Define ANN.
An Artificial Neural Network (ANN) is a mathematical model that tries to simulate the structure
and functionality of biological neural networks. The basic building block of every artificial neural
network is the artificial neuron, a simple mathematical model (function). Such a model has
three simple sets of rules: multiplication, summation, and activation. At the entrance of the artificial
neuron the inputs are weighted, which means that every input value is multiplied by an individual
weight. In the middle section of the artificial neuron, a sum function adds all the weighted inputs
and a bias. At the exit of the artificial neuron, this sum of weighted inputs and bias passes
through an activation function.

Diagram of Artificial Neural Network.


6. What is a Perceptron?

A perceptron is the basic, fundamental model of learning in a neural network. It consists of some
input values, weights and a bias, a weighted sum, and an activation function. Multiple perceptrons
are combined to build a complex neural network.
Each connection in a neural network has an associated weight, which changes in the course of
learning. In this example of supervised learning, the network starts its learning by
assigning a random value to each weight. It then calculates the output value for a set of
records whose expected output values are known; because this set defines the task, it is
called the learning sample. The network compares each calculated output value with the
expected value and then calculates an error function (E).

Diagram of perceptron.

7. Explain architecture of ANN with block diagram.


An Artificial Neural Network primarily consists of three layers:

Input Layer: First is the input layer. This layer will accept the data and pass
it to the rest of the network.

Hidden Layer: The second type of layer is called the hidden layer. The hidden
layer lies between the input and output layers. It performs all the calculations needed to find
hidden features and patterns. A neural network may have one or more hidden layers.

Output Layer: The last type of layer is the output layer. The input goes through
a series of transformations using the hidden layer, which finally results in output that is conveyed
using this layer. The output layer holds the result or the output of the problem.


Questions:
1. Applications of ANN.
2. Advantages and disadvantages of ANN.
3. Explain the network topologies (single layer and multi layer) of an ANN.
4. How does a face recognition system work?

