ML UNIT 2
Introduction: The term "Artificial Neural Network" is derived from the biological neural networks that give the human brain its structure. Just as the human brain has neurons interconnected with one another, an artificial neural network has neurons interconnected with one another in the various layers of the network. These neurons are known as nodes.
• The target function output may be discrete-valued, real-valued, or a vector of several real-valued or
discrete-valued attributes.
• The ability of humans to understand the learned target function is not important.
Perceptrons:
The Perceptron is a machine learning algorithm for the supervised learning of binary classification tasks. A perceptron can also be understood as an artificial neuron, i.e., a neural network unit that helps to detect certain patterns in input data, for example in business intelligence applications. A perceptron-based network is usually described in terms of three layers (a minimal sketch of a single perceptron unit follows the list below):
1. Input layer
2. Hidden layer
3. Output layer
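As a rough illustration, here is a minimal single perceptron unit in Python. The learning rate, the toy AND dataset, and the number of training epochs are assumptions chosen for illustration, not values from these notes.

import numpy as np

class Perceptron:
    # A single perceptron: weighted sum of inputs followed by a step threshold.
    def __init__(self, n_inputs, learning_rate=0.1):
        self.w = np.zeros(n_inputs)   # one weight per input
        self.b = 0.0                  # bias (threshold) term
        self.lr = learning_rate

    def predict(self, x):
        # Step activation: output 1 if the weighted sum exceeds 0, else 0
        return 1 if np.dot(self.w, x) + self.b > 0 else 0

    def train(self, X, y, epochs=10):
        # Perceptron training rule: w <- w + lr * (target - output) * x
        for _ in range(epochs):
            for xi, target in zip(X, y):
                error = target - self.predict(xi)
                self.w += self.lr * error * xi
                self.b += self.lr * error

# Toy dataset (assumed): the logical AND of two binary inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
p = Perceptron(n_inputs=2)
p.train(X, y)
print([p.predict(xi) for xi in X])   # expected: [0, 0, 0, 1]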
Example (face recognition with a multilayer network): the training set contains images of 20 different persons, with 32 images per person; each 120x128-resolution image is reduced to a 30x32-pixel image. After training on 260 images, the network achieves 90% accuracy over a separate test set. Algorithm parameters: learning rate η = 0.3, momentum α = 0.3.
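The learning rate η and momentum α enter the gradient-descent weight update with momentum, Δw(n) = -η · ∂E/∂w + α · Δw(n-1). Below is a minimal sketch of this update rule in Python; the toy error surface and the number of steps are assumptions for illustration, not part of the training setup above.

import numpy as np

def momentum_update(w, grad, prev_delta, eta=0.3, alpha=0.3):
    # delta_w(n) = -eta * gradient + alpha * delta_w(n-1)
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta

# Assumed toy error surface: E(w) = 0.5 * ||w||^2, whose gradient is w itself
w = np.array([1.0, -2.0])
prev_delta = np.zeros_like(w)
for step in range(20):
    grad = w                      # dE/dw for this toy error surface
    w, prev_delta = momentum_update(w, grad, prev_delta)
print(w)   # the weights shrink toward the minimum at the origin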
Advanced topics in artificial neural networks:
An introduction to some advanced neural network topics such as snapshot ensembles, dropout, bias
correction, and cyclical learning rates.
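As one concrete example from this list, a triangular cyclical learning-rate schedule lets the learning rate ramp linearly between a lower and an upper bound over a fixed cycle length. A small sketch in Python; the bounds and cycle length below are assumed values, not taken from these notes.

import math

def triangular_clr(step, base_lr=0.001, max_lr=0.006, step_size=2000):
    # Triangular cyclical learning rate: ramps from base_lr up to max_lr and
    # back down, repeating every 2 * step_size training steps.
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

# Learning rate at a few points in the first cycle
for s in (0, 1000, 2000, 3000, 4000):
    print(s, round(triangular_clr(s), 5))   # 0.001 -> 0.006 -> 0.001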
Evaluating Hypotheses:
Motivation: Motivation is a condition that activates and sustains behavior toward a goal. It is critical
to learning and achievement across the life span in both informal settings and formal learning
environments.
Given a hypothesis learned from a limited sample of data, we want to estimate the accuracy with which it will classify future instances, and also the probable error of this accuracy estimate.
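For a discrete-valued hypothesis tested on n independently drawn examples, an approximate N% confidence interval for the true error is error_S(h) ± z_N · sqrt(error_S(h)(1 - error_S(h)) / n). A small sketch of that computation in Python; the sample counts in the example are made up for illustration.

import math

def error_confidence_interval(misclassified, n, z=1.96):
    # Approximate confidence interval for the true error, given the number of
    # misclassifications on n independently drawn test examples.
    # z = 1.96 corresponds to a 95% interval.
    error_s = misclassified / n                          # sample error
    margin = z * math.sqrt(error_s * (1 - error_s) / n)
    return error_s - margin, error_s + margin

# Assumed example: 12 misclassifications out of 40 test examples
low, high = error_confidence_interval(misclassified=12, n=40)
print(f"true error in [{low:.3f}, {high:.3f}] with ~95% confidence")   # about [0.158, 0.442]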
There is a space of possible instances X. Different instances in X may be encountered with different frequencies, which is modeled by some unknown probability distribution D. Notice that D says nothing about whether x is a positive or negative example. The learning task is to learn the target concept f by considering a space H of possible hypotheses. Training examples of the target function f are provided to the learner by a trainer who draws each instance independently, according to the distribution D, and who then forwards the instance x along with its correct target value f(x) to the learner.
Are instances ever really drawn independently?
Sample error of a hypothesis h with respect to the target function f and data sample S - the fraction of instances in S that h misclassifies:
error_S(h) = (1/n) Σ_{x ∈ S} δ(f(x), h(x))
where n is the number of examples in S, and δ(f(x), h(x)) is 1 if f(x) ≠ h(x), and 0 otherwise.
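A direct computation of the sample error in Python; the target function, hypothesis, and sample below are assumed toy stand-ins.

def sample_error(h, f, S):
    # error_S(h): fraction of instances x in S where h(x) != f(x)
    return sum(1 for x in S if h(x) != f(x)) / len(S)

# Assumed toy setup: the target f is "x is even", the hypothesis h is "x is divisible by 4"
f = lambda x: x % 2 == 0
h = lambda x: x % 4 == 0
S = list(range(10))
print(sample_error(h, f, S))   # 2 and 6 are even but not divisible by 4 -> 0.2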
True error of h with respect to f and distribution D - the probability that h will misclassify a single instance drawn at random according to D:
error_D(h) = Pr_{x ∈ D}[ f(x) ≠ h(x) ]
where Pr_{x ∈ D} denotes that the probability is taken over the instance distribution D.
We really want error_D(h), but we can only measure error_S(h).
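A small simulation can illustrate the gap: sample errors measured on independently drawn samples scatter around the true error. The distribution, hypothesis, and sample size below are assumptions chosen for illustration.

import random

random.seed(0)

# Assumed setup: instances are integers 0..99 drawn uniformly (playing the role of D);
# the target f is "x < 50" and the hypothesis h is "x < 60", so the true error is 0.10.
f = lambda x: x < 50
h = lambda x: x < 60

def sample_error(h, f, S):
    return sum(1 for x in S if h(x) != f(x)) / len(S)

estimates = []
for _ in range(5):
    S = [random.randrange(100) for _ in range(200)]   # 200 instances drawn i.i.d. from D
    estimates.append(sample_error(h, f, S))
print(estimates)   # each sample error is close to, but generally not equal to, 0.10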