Learning Vector Quantization



Last Updated : 07 Jan, 2023

Learning Vector Quantization (LVQ) is a type of Artificial Neural Network that is inspired by biological models of neural systems. It is a prototype-based supervised classification algorithm whose network is trained through a competitive learning scheme similar to the Self-Organizing Map, and it can handle multiclass classification problems. An LVQ network has two layers: an Input layer and an Output layer. The architecture has n input units (one per input feature of a sample) fully connected to an output layer with one unit per class in the input data.

How does Learning Vector Quantization work?

Let's say we have input data of size (m, n), where m is the number of training examples and n is the number of features in each example, along with a label vector of size (m, 1). First, the algorithm initializes a weight matrix of size (n, c) from the first c training samples that carry distinct labels; these c samples are then discarded from the training set. Here, c is the number of classes. The algorithm then iterates over the remaining input data and, for each training example, updates the winning vector (the weight vector with the shortest distance from the training example, e.g. Euclidean distance), as sketched below.
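As a concrete illustration, here is a minimal sketch of the initialization and winner-selection step in plain Python. The helper names init_prototypes and find_winner are illustrative, not from the article's implementation or any library:

Python3

# Pick the first sample of each class as its initial prototype
# (weight vector) and keep the rest of the data for training.
def init_prototypes(X, Y, c):
    prototypes, proto_labels = [], []
    rest_X, rest_Y = [], []
    for x, y in zip(X, Y):
        if y not in proto_labels and len(prototypes) < c:
            prototypes.append(list(x))
            proto_labels.append(y)
        else:
            rest_X.append(x)
            rest_Y.append(y)
    return prototypes, proto_labels, rest_X, rest_Y

# The winning vector is the prototype with the smallest
# squared Euclidean distance to the sample.
def find_winner(prototypes, x):
    dists = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in prototypes]
    return dists.index(min(dists))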

The weight updation rule is given by:

if correctly_classified:
    wij(new) = wij(old) + alpha(t) * (xik - wij(old))
else:
    wij(new) = wij(old) - alpha(t) * (xik - wij(old))
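A minimal sketch of this rule in Python (the helper name update_weight is illustrative only):

Python3

# Move the winning weight toward the sample when the winner's
# class matches the sample's label, away from it otherwise.
def update_weight(w, x, alpha, correct):
    sign = 1 if correct else -1
    return [wi + sign * alpha * (xi - wi) for wi, xi in zip(w, x)]

# e.g. update_weight([0.5, 0.5], [1.0, 0.0], 0.1, True)  -> [0.55, 0.45]
#      update_weight([0.5, 0.5], [1.0, 0.0], 0.1, False) -> [0.45, 0.55]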

where alpha(t) is the learning rate at time t, j denotes the winning vector, i denotes the i-th feature of the training example, and k denotes the k-th training example from the input data. After training, the LVQ network uses the trained weights to classify new examples: a new example is labelled with the class of its winning vector, as sketched below.
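A minimal sketch of this classification step, reusing the hypothetical find_winner helper from the earlier sketch:

Python3

# Label a new example with the class of its nearest (winning) prototype.
def predict(prototypes, proto_labels, x):
    return proto_labels[find_winner(prototypes, x)]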

Algorithm:

Step 1: Initialize reference vectors.

From the given set of training vectors, take the first c training vectors (one per class) and use them as initial weight vectors; the remaining vectors are used for training.

Alternatively, assign the initial weights and classifications randomly.


Step 2: For each training vector x, calculate its squared Euclidean distance to every weight vector, for j = 1 to c:

D(j) = Σ (xi - wij)^2, where the sum runs over the features i = 1 to n

Find the winning unit index J for which D(j) is minimum.

Step 3: Update the weights of the winning unit wJ using the following conditions, where T is the target class of the training vector and CJ is the class represented by the winning unit:

if T = CJ then wJ(new) = wJ(old) + α[x - wJ(old)]

if T ≠ CJ then wJ(new) = wJ(old) - α[x - wJ(old)]

Step 4: Check for the stopping condition; if it is not met, repeat the above steps.

Below is the implementation for the case of two classes.

Python3

import math

class LVQ:

    # Compute the winning vector by Euclidean distance:
    # the winner is the weight vector closest to the sample.
    def winner(self, weights, sample):

        D0 = 0
        D1 = 0

        for i in range(len(sample)):
            D0 = D0 + math.pow((sample[i] - weights[0][i]), 2)
            D1 = D1 + math.pow((sample[i] - weights[1][i]), 2)

        # Return the index of the smaller distance.
        if D0 < D1:
            return 0
        else:
            return 1

    # Update the winning vector: move it toward the sample when the
    # predicted class matches the actual class, away from it otherwise.
    def update(self, weights, sample, J, alpha, actual):
        if actual == J:
            for i in range(len(sample)):
                weights[J][i] = weights[J][i] + alpha * (sample[i] - weights[J][i])
        else:
            for i in range(len(sample)):
                weights[J][i] = weights[J][i] - alpha * (sample[i] - weights[J][i])
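A short driver shows how the class might be used. The dataset, learning rate, and number of epochs below are made-up values for illustration only:

Python3

# Made-up training samples; the first sample of each class becomes
# the initial weight vector, the rest are used for training.
X = [[0, 0, 1, 1], [1, 1, 0, 0], [0, 0, 0, 1],
     [0, 1, 1, 0], [1, 0, 0, 1]]
Y = [0, 1, 0, 1, 1]

# Initialize weights from the first sample of each class
# and discard those samples from the training set.
weights = [X.pop(0), X.pop(0)]
Y = Y[2:]

lvq = LVQ()
alpha = 0.1
for epoch in range(3):  # illustrative number of epochs
    for sample, actual in zip(X, Y):
        J = lvq.winner(weights, sample)
        lvq.update(weights, sample, J, alpha, actual)

# Classify a new example with the trained weights.
test = [0, 1, 1, 1]
print("Predicted class:", lvq.winner(weights, test))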
