
Deep Learning for Computer Vision

Dr. Basma M. Hassan


Faculty of Artificial Intelligence
Kafrelsheikh University

2024/2025
Deep Learning for Computer Vision

Lecture 4:
Outlines
 Introduction

 Architecture Of Neural Networks

 Models

 Before we move further: NN challenging questions


Introduction
 An Artificial Neural Network (ANN) refers to an information processing paradigm inspired by the way biological nervous systems work (e.g., how the brain processes information).

 The paradigm has a novel structure that allows for information processing.

 The paradigm is composed of many highly interconnected neurons (processing elements) that work in harmony to address specific problems.

 Evolution and learning in biological systems include adjustments to the synaptic connections (weights) between the neurons.


Background
 ANNs were first presented in 1943 by the neurophysiologist W. McCulloch and the mathematician W. Pitts in "A Logical Calculus of the Ideas Immanent in Nervous Activity."

 They introduced a simplified computational model describing how biological neurons might work together in brains to perform computational tasks using propositional logic.

 Frank Rosenblatt (1958) presented one of the most influential works on neural nets, "The Perceptron."

 The use of ANNs then witnessed a period of frustration and disrepute.

 At that time, funding and professional support were minimal.

 Thus, research advances in ANNs were made by relatively few researchers.

 Nowadays, the ANN field enjoys great interest and an increase in funding.
Why ANN?
 ANN applications can act as an "expert" in information analysis and decision-making. Other advantages include:

 ANNs are very powerful at providing meaningful interpretations of complex or imprecise data.

 ANNs can be employed to extract features and patterns, detecting regularities that may be too complex for humans or typical computer techniques.

 ANNs are made of a number of processing units (neurons) arranged in layers and working in parallel at any given time.

 ANNs are model-free systems.

 ANNs are widely recognized as adaptive learning techniques: they can learn how to perform tasks using a set of training data or initial experience.

 ANNs can be employed for real-time operations.

 ANN computations can be performed in parallel.


Why ANN?
 There is still much unknown about how the brain trains itself to process information. In human brains:

 typically, a neuron collects signals from others via a host of fine structures known as dendrites.

 each neuron transmits spikes of electrical activity via a long strand called an axon (which splits into thousands of branches).

 a structure termed a synapse, at the end of each branch, converts axon activity into electrical effects that inhibit or excite activity in the connected neurons.

 once a neuron receives excitatory input that is comparatively large relative to the inhibitory input, it transmits a spike of electrical activity down its axon.

 it is commonly understood that learning happens by changing the effectiveness of the synapses, such that the influence of one neuron on another changes.
Humans and AN
Why ANN?
 Human brains are highly complex, nonlinear, parallel information-processing systems.

 Human brains are composed of densely interconnected nerve cells, known as neurons, which are the basic information-processing units.

 Human brains contain about 10 billion neurons and approximately 60 trillion synapses.

 Every neuron has a simple structure, as depicted above, yet the collection of such elements results in tremendous computational and processing power.


From Human Neurons to Artificial Neurons
 An artificial neuron is a computational model inspired by the human brain and can be presented as follows:

 In an actual computer, it is represented as follows:


Artificial Neurons
 An artificial neuron is a system or model that has many inputs and one output.

 The neuron is characterized by two modes of operation:

• the training mode and
• the using mode.
• In training mode, the neuron is trained on particular input patterns.
• In using mode, the neuron takes an action based on the input data.


Artificial Neuron Perceptron
 Frank Rosenblatt (1958) presented one of the most influential works on neural nets, "The Perceptron."

 The perceptron appeared as a model (a neuron with weighted inputs) combined with additional pre-processing.

 When working with images, units labeled as association units are tasked with extracting specific localized features from the input images.

 Perceptrons were mainly employed in pattern recognition, despite their capability to do a lot more.
Architecture of Neural Networks: A single
neuron structure
 A set of input data

 The input data are weighted by an adjusting mechanism

 The weighted inputs are forwarded to a summation function

 A bias is added to the summation to account for drift in the data (up or down)

 The summation plus the bias is forwarded to an activation function, which fires/activates

• a class selection (0 or 1), or

• a more analog value: a linear or nonlinear output

 The activation function generates the neuron output
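The flow above can be sketched as a minimal single-neuron computation in pure Python (the input values, weights, bias, and threshold here are illustrative, not from the slides):

```python
import math

def neuron(inputs, weights, bias, activation):
    """Single neuron: weighted sum of the inputs plus a bias, passed through an activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # summation function + bias
    return activation(z)                                    # activation generates the output

def step(z):
    return 1 if z > 0 else 0  # class selection (0 or 1)

x = [0.5, -1.0, 2.0]   # input data (illustrative values)
w = [0.4, 0.3, 0.6]    # weights from the adjusting mechanism
b = -0.5               # bias accounts for drift (up or down)

print(neuron(x, w, b, step))       # fires a class selection
print(neuron(x, w, b, math.tanh))  # or a more analog nonlinear value
```

Swapping the activation function is all it takes to move between a hard class selection and an analog output.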


Architecture of Neural Networks: A single neuron structure
 Nonlinear model of a neuron (processing element)

 The output of a neuron is a parameterized nonlinear function of its inputs

 Learning occurs by adjusting the parameters (weights) to fit the data

 Other model
Activation Function
 A NN is made of layers.

 The activation function of a single neuron contributes to the next layer.

 The activation function fires/activates

 a class selection: the output is set to 1 if the input is greater than a threshold; otherwise, the output is kept at 0.
Activation Function
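A minimal sketch of the threshold behavior just described, alongside the sigmoid as one common continuous alternative (the sigmoid is not named on the slide; all values are illustrative):

```python
import math

def step(z, threshold=0.0):
    """Fires 1 if the input exceeds the threshold; otherwise keeps the output at 0."""
    return 1 if z > threshold else 0

def sigmoid(z):
    """A continuous activation: squashes any input into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

print(step(0.7))               # above the threshold: fires 1
print(step(-0.2))              # below the threshold: output stays 0
print(round(sigmoid(0.0), 2))  # continuous output, here 0.5
```

The binary/continuous split here mirrors the activation-function classes listed later in this lecture.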
Network Layers
Feed-forward Network
Feedback Network
Learning Process
Learning Process Algorithm
1. Specify the number of layers and the number of neurons in each layer according to (1) the problem at hand and (2) our experience

2. Select the activation function(s) suitable for our problem

3. Initialize the weights and biases randomly

4. Feed the training data to our NN

5. Compare the predicted output to the target output value using a loss function

6. Apply backpropagation to propagate the loss value back through the network

7. Update the weights and biases based on the loss function in a way that the total loss is reduced and a better model is obtained
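The steps above can be sketched end-to-end for the simplest possible case: a single linear neuron trained by gradient descent (the toy data, learning rate, and epoch count are illustrative assumptions; a real NN has multiple layers and full backpropagation):

```python
import random

# Toy training set: learn y = 2x + 1 from four points (illustrative data).
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0]]

w, b = random.uniform(-1, 1), random.uniform(-1, 1)  # step 3: random initialization
lr = 0.05                                            # learning rate (illustrative)

for epoch in range(2000):                # step 4: feed the training data
    for x, target in data:
        pred = w * x + b                 # forward pass through the (linear) neuron
        loss_grad = 2 * (pred - target)  # step 5: derivative of the squared-error loss
        w -= lr * loss_grad * x          # steps 6-7: propagate the error and update
        b -= lr * loss_grad              # ... weights and bias so total loss shrinks

print(round(w, 2), round(b, 2))  # approaches the true slope 2 and intercept 1
```

Each pass over the data nudges the parameters against the loss gradient, which is exactly what steps 5 through 7 of the algorithm describe.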


Types of Networks

1. Multilayer Perceptron

2. Radial Basis Function

3. Kohonen

4. Linear

5. Hopfield

6. Adaline/Madaline

7. Probabilistic Neural Network (PNN)


Neural Networks: Classes of ANNs

ANNs can be classified according to:

1. Architecture (e.g., feed-forward, recurrent)

2. Learning paradigm (e.g., supervised, unsupervised, semi-supervised, reinforcement)

3. Activation functions (binary and continuous)

4. Implementation (software and hardware)


Supervised learning algorithms: classification and
regression
• Algorithms are designed to learn by example.

• NN models are trained using labeled data:

• input data (part of the training dataset, in the form of vector(s))
• desired/target data (part of the training dataset, in the form of vector(s))

 This way, the learning process is similar to a human supervising the whole process.
 During training, the supervised learning algorithm searches for the patterns that best correlate the input to the desired/target output.
Supervised learning algorithms: classification and
regression
Classification

• Input data are assigned to a class or a category (e.g., email spam or not spam).

• The designed model relies on well-labeled data and relates them to the class through a mapping function.

• Accordingly, the mapping function is employed to classify unseen data.

Regression

• It is a predictive statistical process where the model tries to map a function or relationship between dependent and independent variables.

Example:
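As a concrete stand-in for the regression example, here is a minimal ordinary-least-squares fit of a line between an independent and a dependent variable (the data points are made up for illustration):

```python
# Fit y ≈ a*x + b by ordinary least squares (illustrative, slightly noisy data).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]   # roughly y = 2x + 1 with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope: covariance of x and y divided by the variance of x.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x      # intercept follows from the means

print(round(a, 2), round(b, 2))  # slope and intercept of the mapping function
```

The fitted pair (a, b) is the "mapping function" the slide refers to; predicting for a new x is just `a * x + b`.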
Neural Networks: Unsupervised learning
algorithms
• Unsupervised learning algorithms use the underlying patterns manifest in the training data.

• Unsupervised learning algorithms do not use labeled data.

• The goal of unsupervised learning algorithms is to analyze data and extract important features.

• They are extremely useful, as they may find hidden features and patterns within the data that humans may not observe.

Unsupervised learning algorithms can be employed for:

 clustering
 association
 adaptive learning

• One goal of unsupervised learning is to find similarities and differences between data points.

• Another goal is to learn how to do a task without a teacher/expert, unlike supervised learning.

Unsupervised learning algorithms: Clustering
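The clustering idea can be sketched with a tiny 1-D k-means loop (the data points, choice of k = 2, and initialization are illustrative assumptions; empty-cluster handling is omitted for brevity):

```python
# Minimal 1-D k-means sketch: group points without any labels.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers = [points[0], points[3]]         # naive initialization from the data

for _ in range(10):
    clusters = [[], []]
    for p in points:                     # assign each point to its nearest center
        idx = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
        clusters[idx].append(p)
    centers = [sum(c) / len(c) for c in clusters]  # recompute each center as a mean

print([round(c, 2) for c in centers])    # two groups emerge without supervision
```

No labels are used anywhere: the structure (two well-separated groups) is discovered from the data alone, which is the defining property of unsupervised learning.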
Unsupervised learning algorithms: Association

• It is the process of finding relationships between different entities.

• Example: finding items that are bought together in coffee shops (coffee, cheese muffin, and chocolate dip).
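A minimal sketch of the coffee-shop example, finding items bought together by counting pair co-occurrences across baskets (the basket data are invented for illustration):

```python
from collections import Counter
from itertools import combinations

# Transactions from a coffee shop (illustrative baskets).
baskets = [
    {"coffee", "cheese muffin"},
    {"coffee", "cheese muffin", "chocolate dip"},
    {"coffee", "chocolate dip"},
    {"tea"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):  # every pair bought together
        pair_counts[pair] += 1

print(pair_counts.most_common(3))  # the most frequently co-purchased pairs
```

Full association-rule mining (e.g., the Apriori algorithm) extends this counting idea with support and confidence thresholds.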
Reinforcement Learning
Models

• A model is a mathematical relationship that defines the relation between output and input.
• The model is defined in the form of a function, which can be linear or nonlinear.
• The function ties together the input data (known) and parameters called weights and biases (unknown).
• Model parameters (weights and biases) are estimated using a training dataset (input data and desired/target output), commonly known as offline approximation.
• Model parameters (weights and biases) can also be estimated online.
Before we move further: NN challenging questions
• How many hidden layers should be selected? There is no single right answer to this question.

• How many neurons should be selected in each layer? There is no single right answer.

• Which activation function should be selected? There is no single right answer.

• For training, what number of epochs and batch size should be used? There is no single right answer.

• Experience and intuition are helpful.

• A trial-and-error approach is also always an option.


Models
Single-Layer Perceptron, also called the Rosenblatt model
• A trainable ANN model proposed by Frank Rosenblatt (1961) at Cornell University.
• He applied supervised learning to adjust the ANN weights based on an error signal calculated between the actual output and the reference output.
• It was designed for pattern classification of linearly separable patterns.
Models: Perceptron Learning Algorithm
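The perceptron learning rule described above can be sketched on a toy linearly separable problem, logical AND (the learning rate and epoch count are illustrative assumptions):

```python
# Rosenblatt's perceptron rule: adjust weights by the error signal between
# the actual output and the reference (target) output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

w = [0.0, 0.0]
b = 0.0
lr = 0.1  # learning rate (illustrative)

for _ in range(20):
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - pred            # error signal: reference minus actual
        w[0] += lr * error * x1          # adjust weights in proportion to the error
        w[1] += lr * error * x2
        b += lr * error

print([round(v, 1) for v in w], round(b, 1))
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
```

Because AND is linearly separable, the rule converges to weights that classify all four patterns correctly; on non-separable patterns (e.g., XOR) a single-layer perceptron cannot converge, which is precisely its known limitation.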
