
GUIDED BACKPROPAGATION

PRESENTED BY
Abishek Abraham M
Arumuganainar M
Samuel Jebasrit D
Jebaseelan Reginold P
Micheal Roshan A
Moses Jones Julian C
BACKPROPAGATION
Backpropagation, short for "backward propagation of errors," is the algorithm used to compute the gradient of a neural network's loss function with respect to its weights. The gradients it produces are then used to update the weights so as to minimize the loss. Backpropagation is crucial because it tells us exactly how to change each weight to improve the network's performance.
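To make this concrete, here is a minimal sketch (illustrative, not from the slides) of backpropagation through a single sigmoid neuron with squared-error loss; the values of w, b, x, y and the learning rate eta are assumptions for the example:

```python
import numpy as np

# Illustrative sketch: backprop through one sigmoid neuron,
# with loss L = 0.5 * (y_hat - y)**2.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = 0.5, 0.1          # assumed initial parameters
x, y = 2.0, 1.0          # one assumed training example

# Forward pass
z = w * x + b
y_hat = sigmoid(z)
loss = 0.5 * (y_hat - y) ** 2

# Backward pass: chain rule, dL/dw = dL/dy_hat * dy_hat/dz * dz/dw
dL_dyhat = y_hat - y
dyhat_dz = y_hat * (1 - y_hat)   # derivative of the sigmoid
dL_dw = dL_dyhat * dyhat_dz * x
dL_db = dL_dyhat * dyhat_dz

# Gradient-descent update with learning rate eta
eta = 0.1
w -= eta * dL_dw
b -= eta * dL_db
```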
GUIDED BACKPROPAGATION
Guided Backpropagation is a technique used to
visualize and interpret the decisions made by neural
networks, particularly Convolutional Neural Networks
(CNNs). It is an extension of the standard
backpropagation algorithm, designed to highlight the
input features that contribute most to the network's
predictions. Guided backpropagation modifies the
gradient computation during the backward pass to
focus only on positive influences, making it useful for
generating saliency maps and understanding model
behavior.
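One common way to realize this modified gradient rule is a custom ReLU whose backward pass zeroes out negative signals. The sketch below assumes PyTorch and is illustrative rather than the presenters' implementation:

```python
import torch

# Illustrative sketch (assumes PyTorch) of guided backpropagation's
# modified ReLU gradient: the gradient is kept only where BOTH the
# forward input and the incoming gradient are positive, so only
# positive influences flow back to the input.
class GuidedReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x > 0).float() * (grad_output > 0).float()
```

To generate a saliency map, one would swap every ReLU activation in the model for GuidedReLU.apply, run a forward pass, backpropagate from the class score of interest, and visualize the resulting gradient at the input.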
GUIDED BACKPROPAGATION

TWO MAIN PHASES:
Forward Pass:
The input data is passed through the network, and
the output is computed.

Backward Pass:
The error between the predicted output and the
actual target is calculated, and this error is
propagated backward through the network to
update the weights.
STEPS

1. FORWARD PASS
   1. Input data
   2. Compute weighted sum
   3. Apply activation function
   4. Repeat for all layers

2. COMPUTE LOSS
   1. Calculate loss

3. BACKWARD PASS
   1. Compute gradient of loss w.r.t. output
   2. Compute gradient of loss w.r.t. activation
   3. Compute gradient of loss w.r.t. weighted sum (z)
   4. Compute gradient of loss w.r.t. weights (W)
   5. Compute gradient of loss w.r.t. bias (b)
   6. Compute gradient of loss w.r.t. input

4. UPDATE WEIGHTS AND BIASES
   1. Update weights
   2. Update biases
FORWARD PASS

Weighted sum:
z = W⋅x + b

Activation function:
a = f(z)
where f is the activation function (e.g., ReLU, sigmoid, tanh).

Output:
ŷ = a (for the final layer)
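In code, these formulas map one-to-one onto a few lines of NumPy. The shapes below (a 4-dimensional input, 3 output units) are illustrative assumptions:

```python
import numpy as np

# Minimal NumPy sketch of the forward pass for one layer
# (shapes and data are illustrative assumptions, not from the slides).
def relu(z):
    return np.maximum(0, z)

x = np.random.randn(4, 1)   # input column vector
W = np.random.randn(3, 4)   # weight matrix
b = np.random.randn(3, 1)   # bias vector

z = W @ x + b               # weighted sum: z = W.x + b
a = relu(z)                 # activation:   a = f(z)
y_hat = a                   # output (final layer)
```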
BACKWARD PASS

Gradient of loss w.r.t. output:
∂L / ∂ŷ

Gradient of loss w.r.t. activation:
∂L / ∂a = (∂L / ∂ŷ) ⋅ (∂ŷ / ∂a)

Gradient of loss w.r.t. weighted sum (z):
∂L / ∂z = (∂L / ∂a) ⋅ f′(z)
where f′(z) is the derivative of the activation function.

Gradient of loss w.r.t. weights (W):
∂L / ∂W = (∂L / ∂z) ⋅ xᵀ

Gradient of loss w.r.t. bias (b):
∂L / ∂b = ∂L / ∂z

Gradient of loss w.r.t. input (x):
∂L / ∂x = Wᵀ ⋅ (∂L / ∂z)

Update weights and biases:
W_new = W_old − η ⋅ ∂L / ∂W
b_new = b_old − η ⋅ ∂L / ∂b
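The equations above translate into the following self-contained NumPy sketch for a single ReLU layer with squared-error loss (all data and shapes are illustrative assumptions):

```python
import numpy as np

# Sketch of the backward pass for one ReLU layer with loss
# L = 0.5 * ||y_hat - y||^2 (data and shapes are assumed for the example).
x = np.random.randn(4, 1)
W = np.random.randn(3, 4)
b = np.random.randn(3, 1)
y = np.random.randn(3, 1)

# Forward pass (as on the previous slide)
z = W @ x + b
a = np.maximum(0, z)
y_hat = a

# Backward pass, following the equations above
dL_dyhat = y_hat - y                    # gradient of loss w.r.t. output
dL_da = dL_dyhat                        # final layer: y_hat = a
dL_dz = dL_da * (z > 0).astype(float)   # dL/dz = dL/da * f'(z), ReLU derivative
dL_dW = dL_dz @ x.T                     # dL/dW = dL/dz . x^T
dL_db = dL_dz                           # dL/db = dL/dz
dL_dx = W.T @ dL_dz                     # dL/dx = W^T . dL/dz

# Parameter update with learning rate eta
eta = 0.01
W -= eta * dL_dW
b -= eta * dL_db
```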
APPLICATIONS

1. Image Recognition
2. Natural Language Processing (NLP)
3. Speech Recognition
4. Autonomous Vehicles
5. Recommendation Systems
6. Game Playing
7. Healthcare
8. Financial Forecasting
9. Robotics
10. Generative Models
11. Time Series Analysis
12. Reinforcement Learning
13. Computer Vision
14. Real-Time Applications
THANK YOU
