ML Unit 3 Notes
whether the email is spam (1) or not (0). As the network iteratively refines its weights through
back propagation, it becomes adept at distinguishing between spam and legitimate emails,
showcasing the practicality of neural networks in real-world applications like email filtering.
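As a rough illustration of this spam example, the sketch below trains a small network on a handful of made-up numeric email features (the word counts and link counts are assumptions, not a real dataset) using scikit-learn's MLPClassifier, whose weights are fitted from such labelled examples.

# A minimal sketch of the spam-filtering example above (toy data, not a real corpus).
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical features per email: [count of "free", count of "offer", number of links]
X = np.array([[3, 2, 5], [0, 0, 1], [4, 1, 7], [0, 1, 0], [2, 3, 4], [1, 0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = spam, 0 = not spam

# One small hidden layer; the weights are refined iteratively during fit().
clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
clf.fit(X, y)

print(clf.predict([[5, 2, 6], [0, 0, 1]]))   # predictions for two new emails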
Input Layer:
First is the input layer. This layer will accept the data and pass it to the rest of
the network.
Hidden Layer:
The second type of layer is called the hidden layer. The hidden layer lies
between the input and output layers. It performs all the calculations needed to find hidden
features and patterns. A neural network can have one or more hidden layers.
Output Layer:
The last type of layer is the output layer. The input goes through a series of
transformations in the hidden layers, and the final result is conveyed through this layer.
The output layer holds the result, i.e., the output of the problem.
The artificial neural network takes the inputs, computes their weighted sum, and adds a bias.
This computation is represented in the form of a linear function.
The weighted total is then passed as input to an activation function to produce the output.
Activation functions decide whether a node should fire or not. Only the nodes that fire pass
their signal on to the output layer. Different activation functions are available, and the choice
depends on the sort of task we are performing.
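A minimal sketch of this computation for a single node, assuming three inputs and a sigmoid activation (both are illustrative choices, not fixed by these notes):

# One node: weighted sum of inputs plus a bias, passed through an activation function.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.2, 0.1])      # inputs
w = np.array([0.4, 0.3, 0.9])      # weights, one per input
b = 0.1                            # bias

z = np.dot(w, x) + b               # linear function: weighted sum + bias
output = sigmoid(z)                # activation decides how strongly the node "fires"
print(z, output)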
Activation Function: An activation function is a mathematical equation that determines the output of
each element in the neural network. It takes the input from each neuron and transforms it into
an output, usually between 0 and 1 or between -1 and 1. It may be thought of as the extra
transformation applied to the input to obtain the required output.
The activation function decides whether a neuron should be activated or not by calculating the
weighted sum and adding a bias to it. The purpose of the activation function is to
introduce non-linearity into the output of a neuron.
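For illustration, the sketch below shows three commonly used activation functions and their output ranges; which one is appropriate depends on the task.

# Common activation functions and their output ranges (a sketch).
import numpy as np

def sigmoid(z):                    # output in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):                       # output in (-1, 1)
    return np.tanh(z)

def relu(z):                       # output in [0, inf); a popular non-linear choice
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z))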
1. Single Layer Network: A single-layer neural network contains an input layer, an output layer, and
one hidden layer between them. The input layer receives the input signals
and the output layer generates the output signals accordingly.
2. Multi Layer Network: A multi-layer ANN has more than one hidden layer. The layers
between the input and the output layer are called hidden layers.
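The sketch below contrasts the two network types using the definitions above (one hidden layer versus more than one); the layer sizes and random weights are placeholders chosen only for illustration.

# Forward pass through a network with one hidden layer vs. two hidden layers.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.random(3)                          # 3 input features

# Single layer network (as defined above): input -> one hidden layer -> output
W1, b1 = rng.random((4, 3)), rng.random(4)
W2, b2 = rng.random((1, 4)), rng.random(1)
out_single = sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)

# Multi layer network: more than one hidden layer between input and output
W1, b1 = rng.random((4, 3)), rng.random(4)
W2, b2 = rng.random((4, 4)), rng.random(4)
W3, b3 = rng.random((1, 4)), rng.random(1)
out_multi = sigmoid(W3 @ sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2) + b3)

print(out_single, out_multi)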
Each connection in a neural network has an associated weight, which changes in the course of
learning. In supervised learning, the network starts its learning by assigning a random value to
each weight. The output value is then calculated for a set of records whose expected output
values are known; because this set is used for learning, it is called the learning sample. The
network then compares the calculated output value with the expected value and calculates an
error function (E).
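As an illustration, assuming the usual sum-of-squared-errors form for E (the notes do not fix a particular formula):

# Error function E over the learning sample, assuming sum-of-squared errors.
import numpy as np

expected  = np.array([1.0, 0.0, 1.0])      # known outputs of the learning sample
predicted = np.array([0.8, 0.2, 0.6])      # outputs calculated by the network

E = 0.5 * np.sum((expected - predicted) ** 2)
print(E)                                   # the network adjusts its weights to reduce this value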
The back propagation algorithm is used to search the large hypothesis space formed by all possible
weights of the ANN. In the back propagation algorithm, the total loss is propagated back into the
neural network to determine the loss at each node. The weight of each node is then updated to
minimize that node's loss. An ANN uses back propagation as a machine learning algorithm to compute
the gradient of the loss with respect to the weights, which is then used for gradient descent.
Step 1: Inputs arrive through the connected paths.
Step 2: The input is modeled using the current weights W. The weights are usually chosen randomly.
Step 3: Calculate the output of each neuron from the input layer, through the hidden layer, to the output layer.
Step 4: Calculate the error in the outputs.
Step 5: From the output layer, go back to the hidden layer and adjust the weights to reduce the error.
Step 6: Repeat the process until the desired output is achieved. (A sketch of these steps follows below.)
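A minimal sketch of Steps 1 to 6 on a tiny fully connected network with sigmoid activations; the layer sizes (2-3-1), learning rate, and toy data are assumptions chosen only to keep the example short.

# Back propagation on a tiny 2-3-1 network (toy data).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 1: inputs arrive through the connected paths (toy XOR-style dataset).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.array([[0.0], [1.0], [1.0], [0.0]])         # expected outputs

# Step 2: weights are chosen randomly.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
lr = 0.5

for epoch in range(10000):                          # Step 6: repeat the process
    # Step 3: output of each neuron, input layer -> hidden layer -> output layer.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Step 4: calculate the error in the outputs.
    E = 0.5 * np.sum((T - Y) ** 2)

    # Step 5: go back from the output layer to the hidden layer and adjust the weights.
    dY = (Y - T) * Y * (1 - Y)                      # error signal at the output layer
    dH = (dY @ W2.T) * H * (1 - H)                  # error signal at the hidden layer
    W2 -= lr * (H.T @ dY); b2 -= lr * dY.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

print(E)                                            # the error shrinks as the steps repeat

In practice the loop would stop once the error falls below a chosen threshold rather than after a fixed number of passes.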
Applications of ANN:
a) Stock price prediction.
b) Fingerprint recognition.
c) Loan application approval prediction.
d) Autonomous vehicle driving using ANN.
Advantages of ANN:
a) Pattern Recognition : Their proficiency in pattern recognition makes them effective in
tasks such as audio and image identification, natural language processing, and other
intricate data-pattern problems.
b) Parallel processing capability : Artificial neural networks have the numerical strength to
perform more than one task at the same time.
c) Non-linearity : Neural networks are able to model and comprehend complicated
relationships in data by virtue of the non-linear activation functions found in neurons,
which overcome the drawbacks of linear models.
d) Work with incomplete knowledge : After training, an ANN can produce output even when
some of the input data is missing. The loss of performance here depends on the
importance of the missing data.
Disadvantages of ANN:
a) Requirement for large dataset : For efficient training, artificial neural networks need large
datasets; otherwise, their performance may suffer from incomplete data.
b) Computational Power : Training large neural networks can be a laborious and
computationally intensive process that demands a lot of computing power.
c) Need of proper network structure : There is no particular method for determining the
structure of an artificial neural network. An appropriate network structure is arrived at
through experience and trial and error.
d) Process duration is unknown : Training continues until the error is reduced to a specific
value, but this value does not guarantee optimum results, so the duration of the process
cannot be known in advance.
Face Recognition:
A general face recognition system includes four steps: face detection, preprocessing, feature extraction,
and face recognition.
a) Face detection : The main function of this step is to detect a face in the captured image or in
an image selected from the database. The face detection process verifies whether the given
image contains a face; after a face is detected, this output is passed on to the
pre-processing step.
b) Pre-processing : This step prepares the detected face for recognition. Unwanted noise, blur,
varying lighting conditions, and shadowing effects are removed using pre-processing
techniques. Once a clean, smooth face image is obtained, it is used for the feature
extraction process.
c) Feature extraction : In this step, the features of the face are extracted using a feature
extraction algorithm. Extraction is performed for information packing, dimension reduction, and
noise cleaning. After this step, the face patch is usually transformed into a vector of fixed
dimension.
d) Face recognition : Once feature extraction is done, this last step analyzes the representation
of each face and recognizes the identities of the faces to achieve automatic face recognition.
For recognition, a face database must be built: for each person, several images are taken and
their features are extracted and stored in the database. When an input face image arrives for
recognition, it first undergoes face detection, pre-processing, and feature extraction, after
which its features are compared with each face class stored in the database.
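A rough sketch of this four-step pipeline, assuming OpenCV's Haar-cascade detector for face detection and a simple nearest-neighbour comparison for the recognition step; the image file names and the 64x64 patch size are hypothetical placeholders.

# Face recognition pipeline sketch: detection, pre-processing, feature extraction, recognition.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_features(image_bgr):
    # a) Face detection
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # b) Pre-processing: crop, resize, and smooth the face patch
    patch = cv2.GaussianBlur(cv2.resize(gray[y:y+h, x:x+w], (64, 64)), (3, 3), 0)
    # c) Feature extraction: flatten the patch into a fixed-dimension vector
    return patch.flatten().astype(np.float32) / 255.0

# d) Face recognition: compare against feature vectors stored per person (hypothetical files).
database = {name: extract_features(cv2.imread(path))
            for name, path in [("alice", "alice.jpg"), ("bob", "bob.jpg")]}

query = extract_features(cv2.imread("unknown.jpg"))
if query is not None:
    best = min(database, key=lambda n: np.linalg.norm(database[n] - query))
    print("Recognized as:", best)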
5. Define ANN.
An Artificial Neural Network (ANN) is a mathematical model that tries to simulate the structure
and functionality of biological neural networks. The basic building block of every artificial neural
network is the artificial neuron, a simple mathematical model (function). Such a model follows
three simple rules: multiplication, summation, and activation. At the entrance of the artificial
neuron the inputs are weighted, which means that every input value is multiplied by an individual
weight. In the middle section of the artificial neuron a sum function adds all the weighted inputs
and the bias. At the exit of the artificial neuron, the sum of the weighted inputs and the bias
passes through an activation function.
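A minimal sketch of such an artificial neuron, with the three rules (multiplication, summation, activation) written out explicitly; the sigmoid is assumed here as the activation function.

# One artificial neuron: multiplication, summation, activation.
import math

def artificial_neuron(inputs, weights, bias):
    weighted = [x * w for x, w in zip(inputs, weights)]      # multiplication
    total = sum(weighted) + bias                             # summation
    return 1.0 / (1.0 + math.exp(-total))                    # activation (sigmoid assumed)

print(artificial_neuron([1.0, 0.5], [0.8, -0.4], 0.2))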
6. What is a Perceptron?
A perceptron is the simplest kind of artificial neuron: it computes the weighted sum of its inputs,
adds a bias, and applies a threshold (step) activation function to produce a binary output.
Diagram of a perceptron.
Questions:
1. Applications of ANN.
2. Advantages and disadvantages of ANN.
3. Explain the types of layered networks in ANN.
4. How does a face recognition system work?