Single Layer & Multilayer Perceptron
Uploaded by divejdivej16

Single Layer & Multilayer Perceptron

Single Layer Perceptron:
• A perceptron is the smallest element of a neural network.
• A perceptron is a single-layer linear neural network, or a machine learning algorithm used for supervised learning of various binary classifiers.
• It works as an artificial neuron: it performs computations by learning from elements of the input data and processing them to detect patterns in that data.
• A perceptron network is a group of simple logical statements that come together to create an array of complex logical statements, known as a neural network.
• The perceptron is a linear model used for binary classification.

• The perceptron is one of the simplest ANN architectures, invented in 1957 by Frank Rosenblatt.
• It is based on a slightly different artificial neuron called a linear threshold unit (LTU): the inputs and output are numbers (instead of binary on/off values), and each input connection is associated with a weight.
• The LTU computes a weighted sum of its inputs, z = w1 x1 + w2 x2 + ⋯ + wn xn = wᵀ · x, then applies a step function to that sum and outputs the result: hw(x) = step(z) = step(wᵀ · x).

[Figure: Perceptron with 2 inputs and 3 outputs]

• The most common step function used in perceptrons is the Heaviside step function.
• Sometimes the sign function is used instead.
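The LTU computation above can be sketched in a few lines of NumPy. This is a minimal illustration, not a library API; the inputs and weights below are made up for the example:

```python
import numpy as np

def heaviside(z):
    # Heaviside step function: 1 if z >= 0, else 0
    return int(z >= 0)

def ltu_output(x, w):
    # z = w1*x1 + ... + wn*xn = w . x, then apply the step function
    z = np.dot(w, x)
    return heaviside(z)

x = np.array([1.0, 0.0, 1.0])   # example inputs (assumed for illustration)
w = np.array([0.5, -0.2, 0.3])  # example weights (assumed for illustration)
print(ltu_output(x, w))  # weighted sum = 0.5 + 0 + 0.3 = 0.8 >= 0, so output 1
```

Replacing `heaviside` with `np.sign` gives the sign-function variant mentioned above.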

Perceptron Example
Imagine a perceptron (in your brain).
The perceptron tries to decide if you should go to a concert.
Is the artist good? Is the weather good?
What weights should these facts have?

Criteria          Input          Weight
Artist is Good    x1 = 0 or 1    w1 = 0.7
Weather is Good   x2 = 0 or 1    w2 = 0.6
Friend will Come  x3 = 0 or 1    w3 = 0.5
Food is Served    x4 = 0 or 1    w4 = 0.3
Water is Served   x5 = 0 or 1    w5 = 0.4


The Perceptron Algorithm
• Frank Rosenblatt suggested this algorithm:
1. Set a threshold value.
2. Multiply each input by its weight.
3. Sum all the results.
4. Activate the output.

1. Set a threshold value:
• Threshold = 1.5
2. Multiply each input by its weight:
• x1 * w1 = 1 * 0.7 = 0.7
• x2 * w2 = 0 * 0.6 = 0
• x3 * w3 = 1 * 0.5 = 0.5
• x4 * w4 = 0 * 0.3 = 0
• x5 * w5 = 1 * 0.4 = 0.4
3. Sum all the results:
• 0.7 + 0 + 0.5 + 0 + 0.4 = 1.6 (the weighted sum)
4. Activate the output:
• Return true if the sum > 1.5 ("Yes, I will go to the concert")
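The four steps above can be checked directly in Python, using the inputs and weights from the table (a minimal sketch of the same arithmetic):

```python
inputs  = [1, 0, 1, 0, 1]            # x1..x5: artist good, weather bad, friend comes, no food, water served
weights = [0.7, 0.6, 0.5, 0.3, 0.4]  # w1..w5 from the table
threshold = 1.5                      # step 1: set a threshold value

# Steps 2-3: multiply each input by its weight and sum the results
weighted_sum = sum(x * w for x, w in zip(inputs, weights))
print(weighted_sum)  # 0.7 + 0 + 0.5 + 0 + 0.4 = 1.6 (up to float rounding)

# Step 4: activate the output
go_to_concert = weighted_sum > threshold
print(go_to_concert)  # True: "Yes, I will go to the concert"
```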
Limitations of Perceptrons:
• Perceptrons are incapable of solving some trivial problems (e.g., the Exclusive OR (XOR) classification problem).
• The limitations of perceptrons can be overcome by stacking multiple perceptrons into a Multilayer Perceptron (MLP).
• The output of a perceptron can only be a binary number (0 or 1) due to the hard-limit transfer function.
• A perceptron can only classify linearly separable sets of input vectors. If the input vectors are not linearly separable, it cannot classify them properly.
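To illustrate the XOR limitation, the sketch below hand-wires a two-layer network of linear threshold units that computes XOR. The weights and biases are chosen by hand for illustration (hidden units acting as OR and AND); no single LTU can do this, because XOR is not linearly separable:

```python
import numpy as np

def step(z):
    # Heaviside step function
    return int(z >= 0)

def ltu(x, w, b):
    # a single linear threshold unit: step(w . x + b)
    return step(np.dot(w, x) + b)

def xor_mlp(x1, x2):
    x = np.array([x1, x2])
    # Hidden layer (hand-set weights): h1 = OR(x1, x2), h2 = AND(x1, x2)
    h1 = ltu(x, np.array([1, 1]), -0.5)   # fires if x1 + x2 >= 0.5
    h2 = ltu(x, np.array([1, 1]), -1.5)   # fires if x1 + x2 >= 1.5
    # Output layer: XOR = OR and not AND, i.e. fires if h1 - h2 >= 0.5
    return ltu(np.array([h1, h2]), np.array([1, -1]), -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))  # matches the XOR truth table: 0, 1, 1, 0
```

The hidden layer is what makes this possible: no single line can separate {(0,1), (1,0)} from {(0,0), (1,1)} in the input plane.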
MLP example: training a deep neural network classifier on the Iris dataset (TensorFlow 1.x contrib API):

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
import numpy as np

# Data sets
IRIS_TRAINING = "iris_training.csv"
IRIS_TEST = "iris_test.csv"

# Load datasets.
training_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TRAINING,
                                                       target_dtype=np.int)
test_set = tf.contrib.learn.datasets.base.load_csv(filename=IRIS_TEST,
                                                   target_dtype=np.int)

# Specify that all features have real-valued data.
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=4)]

# Build a 3-layer DNN with 10, 20 and 10 units respectively.
classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 20, 10],
                                            n_classes=3)

# Fit model.
classifier.fit(x=training_set.data, y=training_set.target, steps=2000)

# Evaluate accuracy.
accuracy_score = classifier.evaluate(x=test_set.data,
                                     y=test_set.target)["accuracy"]
print('Accuracy: {0:f}'.format(accuracy_score))

# Classify two new flower samples.
new_samples = np.array([[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]],
                       dtype=float)
y = classifier.predict(new_samples)
print('Predictions: {}'.format(str(y)))
