2024/2025
Deep Learning for Computer Vision
Lecture 4: Artificial Neural Networks
Outline
Introduction
Models
An artificial neural network (ANN) is an information-processing model inspired by the way biological nervous systems work (e.g., how the brain processes information).
The paradigm has a novel structure that allows for parallel information processing.
Warren McCulloch and Walter Pitts (1943) introduced a simplified computational model describing how biological neurons might work in the brain.
Frank Rosenblatt (1958) then presented what became, through the early 1960s, the most influential work on neural nets: the Perceptron.
Nowadays, the ANN field enjoys great interest and increasing funding.
Why ANN?
ANN applications can act as an "expert" in the information to be analyzed, supporting analysis and decision-making. Other advantages include:
ANNs are very powerful at deriving meaning from complex or imprecise data.
ANNs can be employed to extract features and patterns that would be too complex to be noticed by humans or by other computer techniques.
ANNs are made of a number of processing units (neurons), arranged in layers and working in parallel at any given time.
ANNs are widely recognized as adaptive learning techniques: they can learn how to perform tasks using the data provided for training.
Typically, a neuron collects signals from others via a host of fine structures known as dendrites.
Each neuron transmits spikes of electrical signals via a long strand called an axon (split into thousands of branches).
A structure termed a synapse, at the end of each branch, converts the activity of the axon into electrical effects that excite or inhibit activity in the connected neurons.
Once a neuron receives an exciting input that is comparatively large relative to the inhibiting input, it sends a spike of electrical activity down its axon.
Human brains are composed of densely interconnected nerve cells, known to be the basic information-processing units of nervous systems.
Human brains contain about 10 billion neurons and approximately 60 trillion synapses.
Every neuron has a simple structure as depicted above, and the collection of such elements constitutes tremendous processing power.
The perceptron appeared as a model (a neuron with weighted inputs) combined with additional pre-processing.
When working with images, the labeled units are known as association units, tasked with extracting specific localized features from the input images.
The summation function is added to a bias to account for drift/bias in the data (up or down).
The summation result, along with the bias, is forwarded to an activation function which fires/activates the output once a threshold is exceeded, as sketched below.
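A minimal sketch of that pipeline in Python (the AND example, the weight values, and the step threshold below are assumptions for illustration, not taken from the lecture):

```python
import numpy as np

def perceptron_output(x, w, b):
    """Perceptron: weighted sum of the inputs plus a bias term,
    passed through a step (threshold) activation."""
    v = np.dot(w, x) + b           # summation function plus bias
    return 1 if v > 0 else 0       # step activation: fire only above the threshold

# Hypothetical weights/bias making the perceptron compute a logical AND
w = np.array([1.0, 1.0])
b = -1.5
print(perceptron_output(np.array([1, 1]), w, b))   # 1 (fires)
print(perceptron_output(np.array([1, 0]), w, b))   # 0 (does not fire)
```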
Architecture of Neural Networks: a single neuron structure (figure)
Nonlinear model of a neuron (processing element) (figure)
Other neuron models (figure)
Activation Function
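In the nonlinear neuron model, the weighted sum v = w1*x1 + ... + wn*xn + b is passed through an activation function φ(v). A minimal sketch of three commonly used activation functions (this selection reflects standard practice, not a list taken from the slides):

```python
import numpy as np

def sigmoid(v):
    """Logistic sigmoid: squashes the weighted sum into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v))

def tanh(v):
    """Hyperbolic tangent: squashes into (-1, 1) and is zero-centered."""
    return np.tanh(v)

def relu(v):
    """Rectified linear unit: keeps positive values, clips negatives to 0."""
    return np.maximum(0.0, v)

v = np.linspace(-3.0, 3.0, 7)   # sample values of the weighted sum
print(sigmoid(v))
print(tanh(v))
print(relu(v))
```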
A NN is made of layers; the number of layers and of neurons per layer is chosen according to the problem in hand.
6. Compare the predicted output to the target output value using a loss function.
7. Apply backpropagation to propagate the loss value back through the network.
8. Update the weights and biases based on the loss function in a way that the total loss is reduced (see the sketch below).
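A minimal sketch of steps 6-8 for a single linear neuron trained with a mean-squared-error loss (the synthetic data, the learning rate, and the single-neuron setup are assumptions for illustration; a real multilayer network would backpropagate through every layer):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # training inputs
y = X @ np.array([2.0, -1.0, 0.5]) + 0.3       # synthetic target outputs

w, b, lr = np.zeros(3), 0.0, 0.1               # weights, bias, learning rate

for epoch in range(200):
    y_pred = X @ w + b                         # forward pass: predicted output
    loss = np.mean((y_pred - y) ** 2)          # step 6: compare prediction to target
    grad = 2.0 * (y_pred - y) / len(y)         # step 7: propagate the loss backward
    w -= lr * (X.T @ grad)                     # step 8: update weights ...
    b -= lr * grad.sum()                       # ... and bias so the total loss is reduced

print(loss, w, b)   # loss shrinks toward 0; w and b approach the generating values
```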
Well-known neural network models include:
1. Multilayer Perceptron
3. Kohonen
4. Linear
5. Hopfield
6. Adaline/Madaline
Neural Networks: Supervised learning algorithms
In supervised learning, the learning process is similar to a human supervising the whole process.
During training, the supervised learning algorithm searches for the patterns that best
correlate input to desired/target output.
Supervised learning algorithms: classification and
regression
Classification
• Input data are assigned to a class or a category (e.g., email spam or not spam).
• The designed model relies on well-labeled data and relates them to the class through a mapping function.
Regression
• It is a predictive statistical process where the model tries to map a function or relationship between the input variables and a continuous output value.
Example:
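A minimal sketch contrasting the two settings (the spam filter matches the slide's classification example; the linear regression model below is an assumed illustration):

```python
import numpy as np

# Classification: map an input to a discrete class label (spam / not spam).
def classify_email(word_counts, w, b):
    score = np.dot(w, word_counts) + b
    return "spam" if score > 0 else "not spam"    # discrete output

# Regression: map an input to a continuous value.
def predict_quantity(features, w, b):
    return float(np.dot(w, features) + b)         # continuous output
```

In both cases the mapping function has the same form; what changes is the type of target the model is trained to reproduce.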
Neural Networks: Unsupervised learning
algorithms
• Unsupervised learning algorithms work on unlabeled data and try to uncover the underlying patterns of the training data.
• The goal of unsupervised learning algorithms is to analyze data and extract important
features.
• They are extremely useful as they may find hidden features and patterns within the data that would otherwise go unnoticed.
• One goal of unsupervised learning is to find similarities and differences between data points.
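As one concrete way to find such similarities without labels, a clustering algorithm groups nearby data points together; k-means is used here as an assumed illustration (it is not named in the lecture):

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Minimal k-means: groups unlabeled data points by similarity."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign every point to its nearest center (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of the points assigned to it
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two hypothetical blobs of 2-D points; k-means separates them without labels
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
labels, centers = kmeans(X, k=2)
print(centers)    # roughly [0, 0] and [5, 5]
```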
• A model is a mathematical relationship that defines the relation between the output and the input data.
• The model is defined in the form of a function, which can be linear or nonlinear.
• The function ties together the input data (known) and parameters called weights and biases (unknown).
• Model parameters (weights and biases) are estimated using a training dataset (input data and desired/target outputs), commonly known as offline approximation.
• Model parameters (weights and biases) can also be estimated online.
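A minimal sketch of offline estimation, assuming a simple linear model y = w*x + b fitted by least squares on a training set (the data and the least-squares choice are illustrative assumptions; online estimation would instead update w and b sample by sample):

```python
import numpy as np

# Known: training inputs and desired/target outputs.
X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([1.0, 3.0, 5.0, 7.0])            # roughly y = 2x + 1

# Unknown model parameters: weight w and bias b of y = w*x + b.
A = np.hstack([X_train, np.ones((len(X_train), 1))])
w, b = np.linalg.lstsq(A, y_train, rcond=None)[0]   # offline (batch) estimation

print(w, b)              # ~2.0 and ~1.0
print(w * 4.0 + b)       # model applied to a new input -> ~9.0
```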
Before we move further: NN challenging questions
• How many hidden layers should be selected? No right answer to this question.
• How many neurons in each layer should be selected? No right answer to this question.
• During training, how many epochs and how many batches should be used? No right answer to this question.