Presentation for deep learning
1. Linear Function
2. Sigmoid Function
3. Tanh Function
4. ReLU Function
1. LINEAR FUNCTION:
Equation: The linear function has the equation of a straight line, i.e. y = x.
No matter how many layers the network has, if all of them are linear, the activation of the last layer is still just a linear function of the input to the first layer, so stacking layers adds no expressive power (see the sketch after this section).
Range: (-inf, +inf)
Uses: The linear activation function is used in just one place: the output layer (e.g. for regression tasks).
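As a minimal sketch (assuming NumPy; the layer sizes and weights below are arbitrary examples), the composition of two linear layers collapses into a single equivalent linear layer:

import numpy as np

# Two "layers" with linear activations: y = W2 @ (W1 @ x + b1) + b2
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = rng.normal(size=3)
two_layer = W2 @ (W1 @ x + b1) + b2

# The same mapping as ONE linear layer: W = W2 @ W1, b = W2 @ b1 + b2
W, b = W2 @ W1, W2 @ b1 + b2
one_layer = W @ x + b

print(np.allclose(two_layer, one_layer))  # True: stacking added nothing

Because the composed weights (W, b) reproduce the two-layer output exactly, depth only helps once a nonlinear activation (such as sigmoid, tanh, or ReLU, covered next) is inserted between layers.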
2. Sigmoid Function