Presentation for deep learning

Introduction

How does the human brain work?

 The brain breaks down an object into its individual properties, compares them with those that are already known, and puts these properties back together.

 Depending on how similar the observed object is to a known category, it is then recognised as, for example, a piece of furniture or a vessel.
Working of a Neuron
What is a NEURAL NETWORK?
 A neural network is a method in artificial
intelligence that teaches computers to
process data in a way that is inspired by the
human brain.
Activation functions:
 The activation function decides whether a neuron
should be activated by calculating the weighted
sum of its inputs and adding a bias to it. The
purpose of the activation function is to introduce
non-linearity into the output of a neuron, as the
sketch below illustrates.
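
To make this concrete, here is a minimal Python/NumPy sketch of a single neuron; the input, weight, and bias values are illustrative, not from the slides:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # inputs to the neuron (illustrative)
w = np.array([0.4, 0.7, -0.2])   # weights (illustrative)
b = 0.1                          # bias (illustrative)

z = np.dot(w, x) + b             # weighted sum of inputs plus bias
a = sigmoid(z)                   # activation introduces non-linearity
print(f"weighted sum z = {z:.3f}, activated output a = {a:.3f}")
```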
TYPES OF ACTIVATION FUNCTIONS:
1. Linear Function
2. Sigmoid Function
3. Tanh Function
4. ReLU Function
1. LINEAR FUNCTION:
 Equation: the linear function has the equation of
a straight line, i.e. y = x.
 No matter how many layers we have, if all of
them are linear, the final activation of the last
layer is nothing but a linear function of the input
to the first layer (checked numerically in the
sketch below).
 Range: -inf to +inf
 Uses: the linear activation function is used in
just one place, the output layer (e.g. for regression).
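
The collapse of stacked linear layers can be checked directly; a small NumPy sketch with arbitrary random shapes and values, purely for illustration:

```python
import numpy as np

# Two stacked linear layers are equivalent to a single linear map:
# W2 @ (W1 @ x + b1) + b2  ==  (W2 @ W1) @ x + (W2 @ b1 + b2)
rng = np.random.default_rng(0)
x = rng.normal(size=3)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

two_layers = W2 @ (W1 @ x + b1) + b2
one_layer = (W2 @ W1) @ x + (W2 @ b1 + b2)
print(np.allclose(two_layers, one_layer))  # True: no extra expressive power
```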
2. Sigmoid Function
 It is a function whose graph is ‘S’ shaped.
 Equation: A = 1 / (1 + e^(-x))
 Nature: non-linear. Notice that for x values
between -2 and 2, the curve is very steep; small
changes in x bring about large changes in the
value of y.
 Value Range: 0 to 1
 Uses: usually used in the output layer of a binary
classifier, where the result is either 0 or 1. Since
the sigmoid's value lies between 0 and 1 only, the
result can easily be predicted as 1 if the value is
greater than 0.5 and as 0 otherwise (see the
sketch below).
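
A short sketch of sigmoid used for binary classification; the raw output values (logits) below are made-up examples:

```python
import numpy as np

def sigmoid(x):
    # A = 1 / (1 + e^(-x)); output always lies between 0 and 1
    return 1.0 / (1.0 + np.exp(-x))

logits = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])  # illustrative raw outputs
probs = sigmoid(logits)
preds = (probs > 0.5).astype(int)  # 1 if greater than 0.5, else 0

print(probs)  # approx [0.047, 0.378, 0.5, 0.622, 0.953]
print(preds)  # [0 0 0 1 1]
```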
3. Tanh Function
 The activation that almost always works better
than the sigmoid function is the tanh function,
also known as the hyperbolic tangent function. It
is mathematically a shifted (and rescaled) version
of the sigmoid function; the two are similar and
can be derived from each other, as the sketch
below verifies.
 Equation:
f(x) = tanh(x) = 2 / (1 + e^(-2x)) - 1
OR
tanh(x) = 2 * sigmoid(2x) - 1
 Value Range: -1 to +1
 Nature: non-linear
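
The relationship between tanh and sigmoid stated above can be verified numerically; a minimal sketch, with arbitrary sample points:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5.0, 5.0, 11)
lhs = np.tanh(x)                     # NumPy's built-in tanh
rhs = 2.0 * sigmoid(2.0 * x) - 1.0   # the identity from the slide

print(np.allclose(lhs, rhs))  # True: tanh is a rescaled, shifted sigmoid
print(lhs.min(), lhs.max())   # outputs stay within (-1, 1)
```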
4. ReLU Function
 ReLU stands for Rectified Linear Unit. It is the
most widely used activation function, chiefly
implemented in the hidden layers of a neural
network.
 Equation: A(x) = max(0, x). It gives an output
of x if x is positive and 0 otherwise.
 Value Range: [0, inf)
 Nature: non-linear, which means we can easily
backpropagate the errors and have multiple
layers of neurons being activated by the ReLU
function.
 In simple words, ReLU learns much faster than
the sigmoid and tanh functions; a sketch of the
function and its gradient follows.
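
A minimal sketch of ReLU and its gradient (the helper names relu and relu_grad are my own illustrative choices, not from the slides):

```python
import numpy as np

def relu(x):
    # A(x) = max(0, x): passes positive values through, zeroes out the rest
    return np.maximum(0.0, x)

def relu_grad(x):
    # Gradient is 1 for x > 0 and 0 otherwise, so backpropagation is cheap
    # and positive activations never saturate (unlike sigmoid/tanh)
    return (x > 0).astype(float)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(z))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(z))  # [0. 0. 0. 1. 1.]
```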
