
UNIT-2

Artificial Neural Networks-1:

Introduction: The term "Artificial Neural Network" is derived from biological neural networks, which form the structure of the human brain. Just as the human brain has neurons interconnected with one another, artificial neural networks also have neurons interconnected with one another in the various layers of the network. These neurons are known as nodes.

Neural network representation: An Artificial Neural Network is best represented as a weighted directed graph, in which the artificial neurons form the nodes and the connections from neuron outputs to neuron inputs form the directed, weighted edges. The network receives an input signal from an external source, such as a pattern or an image, in the form of a vector; these inputs are denoted mathematically as x(1), x(2), ..., x(n) for n inputs.
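
As a minimal sketch (in Python) of a single node of this weighted directed graph, the snippet below computes a weighted sum of the inputs arriving on its edges and applies a threshold activation; the input values, weights, and bias are illustrative assumptions, not from the notes:

import numpy as np

def neuron_output(x, w, b):
    # Weighted sum of the inputs over the incoming edges, plus a bias term
    z = np.dot(w, x) + b
    # Step (threshold) activation: the node fires if the sum is positive
    return 1 if z > 0 else 0

x = np.array([1.0, 0.5, -1.0])   # input vector x(1)..x(3)
w = np.array([0.4, 0.6, 0.2])    # weights on the directed edges
print(neuron_output(x, w, b=-0.1))   # -> 1, since 0.5 - 0.1 = 0.4 > 0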

Appropriate problems for neural network learning:


• Instances are represented by many attribute-value pairs.

• The target function output may be discrete-valued, real-valued, or a vector of several real-valued or
discrete-valued attributes.

• The training examples may contain errors.



• Long training times are acceptable.

• Fast evaluation of the learned target function may be required.

• The ability of humans to understand the learned target function is not important.

Perceptrons:
The Perceptron is a machine learning algorithm for the supervised learning of binary classification tasks. A perceptron can also be understood as an artificial neuron, a single neural network unit, that computes a weighted sum of its inputs and applies a threshold to decide the output class.
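
A minimal sketch of the perceptron learning rule for a binary classification task, using the classic update w ← w + η(t − o)x; the AND dataset, learning rate, and epoch count below are illustrative assumptions:

import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            # Update the weights only when the prediction is wrong
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

# Example: learn the linearly separable logical AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(w, b)

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop eventually finds a separating weight vector.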

Multilayer networks and the back-propagation algorithm:


Multilayer networks solve the classification problem for nonlinearly separable sets by employing hidden layers, whose neurons are not directly connected to the output. The additional hidden layers can be interpreted geometrically as additional hyperplanes, which enhance the separation capacity of the network.



Backpropagation algorithm:
Backpropagation (backward propagation of errors) is an important mathematical tool for improving the accuracy of predictions in data mining and machine learning. Essentially, backpropagation is an algorithm used to calculate the derivatives of the error with respect to the network weights quickly. A backpropagation network is organized into three kinds of layers:

1. Input layer
2. Hidden layer
3. Output layer



The following steps summarize the functioning of the backpropagation approach.

1. The input layer receives the input x.
2. The input is modeled using the weights w.
3. Each hidden layer calculates its output, and the result becomes available at the output layer.
4. The difference between the actual output and the desired output is known as the error.
5. Go back through the hidden layers and adjust the weights so that this error is reduced in future runs.

This process is repeated until we get the desired output. The training phase is done with supervision. Once the model is stable, it is used in production.
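
A hedged sketch of the five steps above for a network with a single hidden layer, trained here on the XOR problem; the layer sizes, sigmoid activations, learning rate, and epoch count are illustrative assumptions:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights
lr = 0.5

for epoch in range(5000):
    # Steps 1-3: forward pass through the hidden layer to the output layer
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Step 4: error between desired and actual output
    err = y - out
    # Step 5: propagate the error backwards and adjust the weights
    d_out = err * out * (1 - out)          # sigmoid derivative at the output
    d_h = (d_out @ W2.T) * h * (1 - h)     # error pushed back to the hidden layer
    W2 += lr * (h.T @ d_out); b2 += lr * d_out.sum(axis=0)
    W1 += lr * (X.T @ d_h);   b1 += lr * d_h.sum(axis=0)

print(out.round(2))   # typically approaches [[0], [1], [1], [0]]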



Artificial Neural Networks-2

Remarks on the Back-Propagation algorithm:


1. Convergence and local minima
2. Representational power of feedforward networks
3. Hypothesis space search and inductive bias
4. Hidden layer representations
5. Generalization, overfitting, and stopping criterion

An illustrative example: face recognition; advanced topics in artificial neural networks:

Neural nets for face recognition

Training images: 20 different persons with 32 images per person (120x128 resolution, downsampled to a 30x32 pixel image). After 260 training images, the network achieves an accuracy of 90% over a separate test set. Algorithm parameters: η = 0.3, α = 0.3 (learning rate and momentum).
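
A hedged configuration sketch of such a network: the 960 (30x32) pixel inputs and the values of η and α come from the notes, while the 3 hidden units, 4 outputs (one per face direction), and small random initial weights follow Mitchell's classic setup and are assumptions here:

import numpy as np

n_inputs, n_hidden, n_outputs = 30 * 32, 3, 4   # 960 pixels in; face direction out
eta, alpha = 0.3, 0.3                           # learning rate and momentum from the notes

rng = np.random.default_rng(0)
W = rng.uniform(-0.05, 0.05, size=(n_inputs, n_hidden))
delta_prev = np.zeros_like(W)                   # previous weight change, for momentum

def momentum_update(W, grad, delta_prev):
    # Momentum update: delta_w(t) = -eta * grad + alpha * delta_w(t-1)
    delta = -eta * grad + alpha * delta_prev
    return W + delta, delta

# Example use with a hypothetical gradient of the error with respect to W:
grad = rng.normal(size=W.shape) * 1e-3
W, delta_prev = momentum_update(W, grad, delta_prev)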
Advanced topics in artificial neural networks:

An introduction to some advanced neural network topics such as snapshot ensembles, dropout, bias
correction, and cyclical learning rates.
Evaluating Hypotheses:
Motivation: Evaluating hypotheses is central to machine learning: once a hypothesis has been learned, we want to know how well it will perform on instances it has not yet seen, and how much confidence to place in that estimate. In particular, evaluation involves:

 Estimating the accuracy with which a learned hypothesis will classify future instances, and also the probable error of this accuracy estimate.
 There is a space of possible instances X. Different instances in X may be encountered with different frequencies, which is modeled by some unknown probability distribution D. Notice that D says nothing about whether an instance x is a positive or negative example. The learning task is to learn the target concept c by considering a space H of possible hypotheses. Training examples of the target function are provided to the learner by a trainer who draws each instance x independently, according to the distribution D, and who then forwards the instance x along with its correct target value c(x) to the learner.
 Are instances ever really drawn independently?
 Sample error - the fraction of instances in some sample S that hypothesis h misclassifies: error_S(h) = (1/n) Σ_{x∈S} δ(f(x) ≠ h(x)), where n is the number of samples in S, and δ(f(x) ≠ h(x)) is 1 if f(x) ≠ h(x), and 0 otherwise.
 True error - the probability that h will misclassify a single instance drawn at random from the distribution: error_D(h) = Pr_{x∈D}[f(x) ≠ h(x)], where Pr_{x∈D} denotes that the probability is taken over the instance distribution D.
 We really want error_D(h) but can only measure error_S(h); the sketch below illustrates both quantities.
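
A minimal sketch of these two error measures in Python, assuming a hypothetical target function f, hypothesis h, and distribution D uniform over the integers 0..99 (all illustrative, not from the notes):

import random

def sample_error(h, f, S):
    # error_S(h): fraction of instances in the sample S that h misclassifies
    return sum(1 for x in S if h(x) != f(x)) / len(S)

f = lambda x: x >= 50        # hypothetical target concept
h = lambda x: x >= 55        # learned hypothesis, wrong exactly on x in 50..54

# Draw a sample S of n = 20 instances independently from D (uniform on 0..99)
S = [random.randrange(100) for _ in range(20)]

# The true error error_D(h) is 5/100 = 0.05 by construction;
# sample_error(h, f, S) is the observable estimate of it.
print(sample_error(h, f, S))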

