Lab File
Machine Learning Lab
(KAI651)
Submitted To:
Faculty Name: Kshmta Chauhan
Designation: Assistant Professor

Submitted By:
Name: Kamal Nayan Yagik
Roll No.: 2101641520077
Section: CS-AI-3A
Table of Contents
• Vision and Mission Statements of the Institute
• List of Experiments
• Index
Department Vision Statement
To be a recognized Department of Computer Science & Engineering that produces versatile
computer engineers, capable of adapting to the changing needs of computer and related industry.
Department Mission Statements
i. To provide broad based quality education with knowledge and attitude to succeed in Computer
Science & Engineering careers.
ii. To prepare students for emerging trends in computer and related industry.
iii. To develop competence in students by providing them skills and aptitude to foster culture of
continuous and lifelong learning.
iv. To develop practicing engineers who investigate research, design, and find workable solutions to
complex engineering problems with awareness & concern for society as well as environment.
Department Program Educational Objectives (PEOs)
ii. Graduates will possess capability of designing successful innovative solutions to real life problems
that are technically sound, economically viable and socially acceptable.
iii. Graduates will be competent team leaders, effective communicators and capable of working in
multidisciplinary teams following ethical values.
iv. The graduates will be capable of adapting to new technologies/tools and constantly upgrading their
knowledge and skills with an attitude for lifelong learning.
Department Program Outcomes (POs)
The students of the Computer Science and Engineering Department will be able to:
1. Engineering knowledge: Apply the knowledge of mathematics, science, Computer Science &
Engineering fundamentals, and an engineering specialization to the solution of complex engineering
problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and Computer Science & Engineering sciences.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modelling to complex Computer Science &
Engineering activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the
professional engineering practice in the field of Computer Science and Engineering.
7. Environment and sustainability: Understand the impact of the professional Computer Science
& Engineering solutions in societal and environmental contexts, and demonstrate the knowledge of,
and need for sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms
of the Computer Science & Engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader in
diverse teams, and in multidisciplinary settings.
11. Project management and finance: Demonstrate knowledge and understanding of the
Computer Science & Engineering and management principles and apply these to one’s own work,
as a member and leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.
Department Program Specific Outcomes (PSOs)
The students will be able to:
2. Understand the processes that support the delivery and management of information systems
within a specific application environment.
Course Outcomes

Experiment-1
Write a program to implement and demonstrate the FIND-S algorithm for finding the most specific hypothesis based on a given set of training data samples. Read the training data from a .CSV file.

Training Examples:
Program:
import csv

num_attributes = 6
a = []
print("\n The Given Training Data Set \n")
with open('enjoysport.csv', 'r') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        a.append(row)
        print(row)

print("\n The initial value of hypothesis: ")
hypothesis = ['0'] * num_attributes
print(hypothesis)

# Initialize the hypothesis with the first training instance
for j in range(0, num_attributes):
    hypothesis[j] = a[0][j]

print("\n Find S: Finding a Maximally Specific Hypothesis\n")
for i in range(0, len(a)):
    # Generalize the hypothesis only on positive examples
    if a[i][num_attributes] == 'yes':
        for j in range(0, num_attributes):
            if a[i][j] != hypothesis[j]:
                hypothesis[j] = '?'
            else:
                hypothesis[j] = a[i][j]
    print(" For Training instance No:{0} the hypothesis is ".format(i), hypothesis)

print("\n The Maximally Specific Hypothesis for a given Training Examples :\n")
print(hypothesis)
Data Set:
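The placeholder above would hold the classic EnjoySport training examples from Mitchell's Machine Learning; a representative enjoysport.csv (no header row, since the listing reads every row as data) is:

sunny,warm,normal,strong,warm,same,yes
sunny,warm,high,strong,warm,same,yes
rainy,cold,high,strong,warm,change,no
sunny,warm,high,strong,cool,change,yes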
Output:
Experiment-2
For a given set of training data examples stored in a .CSV file, implement and demonstrate the Candidate-Elimination algorithm to output a description of the set of all hypotheses consistent with the training examples.

Program:-
import numpy as np
import pandas as pd

data = pd.read_csv('enjoysport.csv')
concepts = np.array(data.iloc[:, 0:-1])
print(concepts)
target = np.array(data.iloc[:, -1])
print(target)

def learn(concepts, target):
    specific_h = concepts[0].copy()
    print("initialization of specific_h and general_h")
    print(specific_h)
    general_h = [["?" for i in range(len(specific_h))] for i in range(len(specific_h))]
    print(general_h)
    for i, h in enumerate(concepts):
        print("For Loop Starts")
        if target[i] == "yes":
            print("If instance is Positive ")
            # Generalize the specific boundary on positive examples
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    specific_h[x] = '?'
                    general_h[x][x] = '?'
        if target[i] == "no":
            print("If instance is Negative ")
            # Specialize the general boundary on negative examples
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    general_h[x][x] = specific_h[x]
                else:
                    general_h[x][x] = '?'
    # Drop fully general rows from general_h
    indices = [i for i, val in enumerate(general_h) if val == ['?', '?', '?', '?', '?', '?']]
    for i in indices:
        general_h.remove(['?', '?', '?', '?', '?', '?'])
    return specific_h, general_h
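The listing above never invokes learn(); a minimal driver (print labels chosen to match the output shown below) completes it:

s_final, g_final = learn(concepts, target)
print("Final Specific_h:")
print(s_final)
print("Final General_h:")
print(g_final)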
OUTPUT:
Final Specific_h:
['sunny' 'warm' '?' 'strong' '?' '?']
Final General_h:
[['sunny', '?', '?', '?', '?', '?'], ['?', 'warm', '?', '?', '?', '?']]
Experiment-3
Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an
appropriate data set for building the decision tree and apply this knowledge to classify a new
sample.
ID3 Steps
1. Calculate the Information Gain of each feature.
2. Considering that all rows don't belong to the same class, split the dataset S into subsets using the feature for which the Information Gain is maximum.
3. Make a decision tree node using the feature with the maximum Information Gain.
4. If all rows belong to the same class, make the current node a leaf node with the class as its label.
5. Repeat for the remaining features until we run out of features, or the decision tree consists only of leaf nodes.
Entropy(S) = - ∑ pᵢ * log₂(pᵢ) ; i = 1 to n

where
n is the total number of classes in the target column (in our case n = 2, i.e. YES and NO), and
pᵢ is the probability of class i, i.e. the ratio of the number of rows with class i in the target column to the total number of rows in the dataset.

Information Gain for a feature column A is calculated as:

Gain(S, A) = Entropy(S) - ∑ (|Sᵥ| / |S|) * Entropy(Sᵥ) ; over all values v of A

where Sᵥ is the set of rows in S for which the feature column A has value v, |Sᵥ| is the number of rows in Sᵥ, and |S| is the number of rows in S.
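As a quick check of these formulas, the following snippet computes Entropy(S) and Gain(S, Outlook) for the PlayTennis training data used below (9 yes / 5 no overall; Outlook splits into sunny 2+/3-, overcast 4+/0-, rain 3+/2-):

import math

def entropy(probs):
    # Entropy(S) = -sum(p * log2(p))
    return -sum(p * math.log(p, 2) for p in probs if p > 0)

# Entropy of the full PlayTennis target column: 9 yes, 5 no out of 14 rows
S = entropy([9/14, 5/14])

# Gain(S, Outlook): sunny = 2+/3-, overcast = 4+/0-, rain = 3+/2-
gain_outlook = S - (5/14) * entropy([2/5, 3/5]) \
                 - (4/14) * entropy([4/4]) \
                 - (5/14) * entropy([3/5, 2/5])
print(round(S, 3), round(gain_outlook, 3))  # 0.94 0.247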
Programs:-
import math
import csv

def load_csv(filename):
    lines = csv.reader(open(filename, "r"))
    dataset = list(lines)
    headers = dataset.pop(0)
    return dataset, headers

class Node:
    def __init__(self, attribute):
        self.attribute = attribute
        self.children = []
        self.answer = ""

def subtables(data, col, delete):
    # Partition the rows of data by the values of column col
    dic = {}
    coldata = [row[col] for row in data]
    attr = list(set(coldata))
    counts = [0] * len(attr)
    r = len(data)
    c = len(data[0])
    for x in range(len(attr)):
        for y in range(r):
            if data[y][col] == attr[x]:
                counts[x] += 1
    for x in range(len(attr)):
        dic[attr[x]] = [[0 for i in range(c)] for j in range(counts[x])]
        pos = 0
        for y in range(r):
            if data[y][col] == attr[x]:
                if delete:
                    del data[y][col]
                dic[attr[x]][pos] = data[y]
                pos += 1
    return attr, dic

def entropy(S):
    attr = list(set(S))
    if len(attr) == 1:
        return 0
    counts = [0, 0]
    for i in range(2):
        counts[i] = sum([1 for x in S if attr[i] == x]) / (len(S) * 1.0)
    sums = 0
    for cnt in counts:
        sums += -1 * cnt * math.log(cnt, 2)
    return sums

def compute_gain(data, col):
    attr, dic = subtables(data, col, delete=False)
    total_size = len(data)
    entropies = [0] * len(attr)
    ratio = [0] * len(attr)
    # Gain = Entropy(S) - sum over values v of (|Sv|/|S|) * Entropy(Sv)
    total_entropy = entropy([row[-1] for row in data])
    for x in range(len(attr)):
        ratio[x] = len(dic[attr[x]]) / (total_size * 1.0)
        entropies[x] = entropy([row[-1] for row in dic[attr[x]]])
        total_entropy -= ratio[x] * entropies[x]
    return total_entropy

def build_tree(data, features):
    lastcol = [row[-1] for row in data]
    if (len(set(lastcol))) == 1:
        # All rows belong to the same class: make a leaf node
        node = Node("")
        node.answer = lastcol[0]
        return node
    n = len(data[0]) - 1
    gains = [0] * n
    for col in range(n):
        gains[col] = compute_gain(data, col)
    split = gains.index(max(gains))  # feature with maximum information gain
    node = Node(features[split])
    fea = features[:split] + features[split + 1:]
    attr, dic = subtables(data, split, delete=True)
    for x in range(len(attr)):
        child = build_tree(dic[attr[x]], fea)
        node.children.append((attr[x], child))
    return node

def print_tree(node, level):
    if node.answer != "":
        print(" " * level, node.answer)
        return
    print(" " * level, node.attribute)
    for value, n in node.children:
        print(" " * (level + 1), value)
        print_tree(n, level + 2)

def classify(node, x_test, features):
    if node.answer != "":
        print(node.answer)
        return
    pos = features.index(node.attribute)
    for value, n in node.children:
        if x_test[pos] == value:
            classify(n, x_test, features)

'''Main program'''
dataset, features = load_csv(r'C:\Users\anil1\Desktop\ML KAI601 Notes\ML Lab\id3.csv')
node1 = build_tree(dataset, features)
print("The decision tree for the dataset using ID3 algorithm is")
print_tree(node1, 0)
testdata, features = load_csv(r'C:\Users\anil1\Desktop\ML KAI601 Notes\ML Lab\id3_test_1.csv')
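The test file is loaded but never classified; a short driver loop (a sketch consistent with the classify function above) finishes the experiment:

for xtest in testdata:
    print("The test instance:", xtest)
    print("The label for test instance:", end=" ")
    classify(node1, xtest, features)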
Training Data:
Outlook Temperature Humidity Wind PlayTennis
Sunny hot high weak no
Sunny hot high strong no
Overcast hot high weak yes
Rain mild high weak yes
Rain cool normal weak yes
Rain cool normal strong no
Overcast cool normal strong yes
Sunny mild high weak no
Sunny cool normal weak yes
Rain mild normal weak yes
Sunny mild normal strong yes
Overcast mild high strong yes
Overcast hot normal weak yes
Rain mild high strong no
Test Data:
Outlook Temperature Humidity Wind
Rain cool normal strong
Sunny mild normal strong
OUTPUT:
ID3 Tree:
Experiment-4
Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets.

Programs:-
import numpy as np

X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)  # two inputs [sleep, study]
y = np.array(([92], [86], [89]), dtype=float)        # one output [Expected % in Exams]
X = X / np.amax(X, axis=0)  # normalize each feature column by its maximum
y = y / 100

# Sigmoid function and its derivative
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def derivatives_sigmoid(x):
    return x * (1 - x)

# Variable initialization
epoch = 5000              # setting training iterations
lr = 0.1                  # setting learning rate
inputlayer_neurons = 2    # number of features in data set
hiddenlayer_neurons = 3   # number of hidden layer neurons
output_neurons = 1        # number of neurons at output layer

# Weight and bias initialization (missing from the captured listing; drawn uniformly at random)
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # Forward Propagation
    hinp1 = np.dot(X, wh)
    hinp = hinp1 + bh
    hlayer_act = sigmoid(hinp)
    outinp1 = np.dot(hlayer_act, wout)
    outinp = outinp1 + bout
    output = sigmoid(outinp)

    # Backpropagation
    EO = y - output
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)
    # how much hidden layer weights contributed to error
    hiddengrad = derivatives_sigmoid(hlayer_act)
    d_hiddenlayer = EH * hiddengrad

    # Update the weights
    wout += hlayer_act.T.dot(d_output) * lr
    wh += X.T.dot(d_hiddenlayer) * lr

print("Input: \n" + str(X))
print("Actual Output: \n" + str(y))
print("Predicted Output: \n", output)
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output:
[[0.85726252]
[0.83545767]
[0.85611575]]
Experiment-5
Write a program to implement the naïve Bayesian classifier for a sample training data set stored as a .CSV file. Compute the accuracy of the classifier, considering a few test data sets.
Data set:
Examples Pregnancies Glucose BloodPressure SkinThickness Insulin BMI DiabeticPedigreeFunction Age Outcome
1 6 148 72 35 0 33.6 0.627 50 1
2 1 85 66 29 0 26.6 0.351 31 0
3 8 183 64 0 0 23.3 0.672 32 1
4 1 89 66 23 94 28.1 0.167 21 0
5 0 137 40 35 168 43.1 2.288 33 1
6 5 116 74 0 0 25.6 0.201 30 0
7 3 78 50 32 88 31 0.248 26 1
8 10 115 0 0 0 35.3 0.134 29 0
9 2 197 70 45 543 30.5 0.158 53 1
10 8 125 96 0 0 0 0.232 54 1
Program:
import csv
import random
import math

def load_csv(filename):
    lines = csv.reader(open(filename, "r"))
    dataset = list(lines)
    for i in range(len(dataset)):
        # converting strings into numbers for processing
        dataset[i] = [float(x) for x in dataset[i]]
    return dataset

def separate_by_class(dataset):
    # creates a dictionary of classes 1 and 0 where the values are the instances belonging to each class
    separated = {}
    for i in range(len(dataset)):
        vector = dataset[i]
        if (vector[-1] not in separated):
            separated[vector[-1]] = []
        separated[vector[-1]].append(vector)
    return separated

def mean(numbers):
    return sum(numbers) / float(len(numbers))

def stdev(numbers):
    avg = mean(numbers)
    variance = sum([pow(x - avg, 2) for x in numbers]) / float(len(numbers) - 1)
    return math.sqrt(variance)

def summarize(dataset):
    # creates a list of tuples (mean(attribute), stdev(attribute)) for each attribute column
    summaries = [(mean(attribute), stdev(attribute)) for attribute in zip(*dataset)]
    del summaries[-1]  # excluding the class label column (+ve or -ve)
    return summaries

def summarize_by_class(dataset):
    separated = separate_by_class(dataset)
    summaries = {}
    for class_value, instances in separated.items():
        # summarize is used to calculate mean and std dev per class
        summaries[class_value] = summarize(instances)
    return summaries
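The main program below calls split_dataset, get_predictions, and get_accuracy, which did not survive in this copy; a minimal sketch of these helpers for a standard Gaussian naïve Bayes classifier (the intermediate helpers calculate_probability, calculate_class_probabilities, and predict are assumptions consistent with main()):

def split_dataset(dataset, split_ratio):
    # randomly split the data into training and test sets
    train_size = int(len(dataset) * split_ratio)
    train_set = []
    copy = list(dataset)
    while len(train_set) < train_size:
        index = random.randrange(len(copy))
        train_set.append(copy.pop(index))
    return train_set, copy

def calculate_probability(x, mean, stdev):
    # Gaussian probability density function
    exponent = math.exp(-(pow(x - mean, 2) / (2 * pow(stdev, 2))))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

def calculate_class_probabilities(summaries, input_vector):
    # multiply the per-attribute Gaussian likelihoods for each class
    probabilities = {}
    for class_value, class_summaries in summaries.items():
        probabilities[class_value] = 1
        for i in range(len(class_summaries)):
            mean, stdev = class_summaries[i]
            probabilities[class_value] *= calculate_probability(input_vector[i], mean, stdev)
    return probabilities

def predict(summaries, input_vector):
    # pick the class with the highest probability
    probabilities = calculate_class_probabilities(summaries, input_vector)
    best_label, best_prob = None, -1
    for class_value, probability in probabilities.items():
        if best_label is None or probability > best_prob:
            best_prob = probability
            best_label = class_value
    return best_label

def get_predictions(summaries, test_set):
    return [predict(summaries, test_set[i]) for i in range(len(test_set))]

def get_accuracy(test_set, predictions):
    correct = sum(1 for i in range(len(test_set)) if test_set[i][-1] == predictions[i])
    return (correct / float(len(test_set))) * 100.0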
def main():
    filename = 'naivedata.csv'
    split_ratio = 0.67
    dataset = load_csv(filename)
    training_set, test_set = split_dataset(dataset, split_ratio)
    print('Split {0} rows into train={1} and test={2} rows'.format(len(dataset), len(training_set), len(test_set)))
    # prepare model
    summaries = summarize_by_class(training_set)
    # test model: find the predictions of test data with the training data
    predictions = get_predictions(summaries, test_set)
    accuracy = get_accuracy(test_set, predictions)
    print('Accuracy of the classifier is: {0}%'.format(accuracy))

main()
Output:
Experiment-6
Assuming a set of documents that need to be classified, use the naïve Bayesian Classifier model to perform this task. Calculate the accuracy, precision, and recall for your data set.

Data set:-
Text Documents Label
1 I love this sandwich pos
2 This is an amazing place pos
3 I feel very good about these beers pos
4 This is my best work pos
5 What an awesome view pos
6 I do not like this restaurant neg
7 I am tired of this stuff neg
8 I can't deal with this neg
9 He is my sworn enemy neg
10 My boss is horrible neg
11 This is an awesome place pos
12 I do not like the taste of this juice neg
13 I love to dance pos
14 I am sick and tired of this place neg
15 What a great holiday pos
16 That is a bad locality to stay neg
17 We will have good fun tomorrow pos
18 I went to my enemy's house today neg
Program:-
import pandas as pd

# Load the documents and map the text labels to numbers (file name assumed)
msg = pd.read_csv('naivetext.csv', names=['message', 'label'])
print('The dimensions of the dataset', msg.shape)
msg['labelnum'] = msg.label.map({'pos': 1, 'neg': 0})
X = msg.message
y = msg.labelnum
print(X)
print(y)
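The captured listing ends here; to reproduce the train/test counts and the accuracy, precision, and recall reporting the aim asks for, a sketch of the remaining pipeline (assuming sklearn's CountVectorizer and MultinomialNB, the usual choices for this experiment):

from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

# split the labelled messages into training and test sets
xtrain, xtest, ytrain, ytest = train_test_split(X, y)
print('The total number of Training Data:', ytrain.shape)
print('The total number of Test Data:', ytest.shape)

# turn each document into a bag-of-words count vector
count_vect = CountVectorizer()
xtrain_dtm = count_vect.fit_transform(xtrain)
xtest_dtm = count_vect.transform(xtest)

# train the multinomial naive Bayes classifier and evaluate
clf = MultinomialNB().fit(xtrain_dtm, ytrain)
predicted = clf.predict(xtest_dtm)
print('Accuracy of the classifier is', metrics.accuracy_score(ytest, predicted))
print('Recall:', metrics.recall_score(ytest, predicted))
print('Precision:', metrics.precision_score(ytest, predicted))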
Output:
The dimensions of the dataset (18, 2)
0 I love this sandwich
1 This is an amazing place
2 I feel very good about these beers
3 This is my best work
4 What an awesome view
5 I do not like this restaurant
6 I am tired of this stuff
7 I can't deal with this
8 He is my sworn enemy
9 My boss is horrible
10 This is an awesome place
11 I do not like the taste of this juice
12 I love to dance
13 I am sick and tired of this place
14 What a great holiday
15 That is a bad locality to stay
16 We will have good fun tomorrow
17 I went to my enemy's house today
Name: message, dtype: object
0 1
1 1
2 1
3 1
4 1
5 0
6 0
7 0
8 0
9 0
10 1
11 0
12 1
13 0
14 1
15 0
16 1
17 0
Name: labelnum, dtype: int64
The total number of Training Data: (13,) The total number of Test Data: (5,)
Experiment-7
Write a program to construct a Bayesian network considering medical data. Use this model to demonstrate the diagnosis of heart patients using a standard Heart Disease Data Set (Java/Python ML library classes/API can be used).

Data Set:
Attribute Information:
1. age: age in years
2. sex: sex (1 = male; 0 = female)
3. cp: chest pain type
• Value 1: typical angina
• Value 2: atypical angina
• Value 3: non-anginal pain
• Value 4: asymptomatic
4. trestbps: resting blood pressure (in mm Hg on admission to the hospital)
5. chol: serum cholestoral in mg/dl
6. fbs: (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)
7. restecg: resting electrocardiographic results
• Value 0: normal
• Value 1: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV)
• Value 2: showing probable or definite left ventricular hypertrophy by Estes' criteria
8. thalach: maximum heart rate achieved
9. exang: exercise induced angina (1 = yes; 0 = no)
10. oldpeak: ST depression induced by exercise relative to rest
11. slope: the slope of the peak exercise ST segment
• Value 1: upsloping
• Value 2: flat
• Value 3: downsloping
12. thal: 3 = normal; 6 = fixed defect; 7 = reversable defect
13. Heartdisease: It is integer valued from 0 (no presence) to 4.
Some instance from the dataset:
age sex cp trestbps chol fbs restecg thalach exang oldpeak slope ca thal Heartdisease
63 1 1 145 233 1 2 150 0 2.3 3 0 6 0
67 1 4 160 286 0 2 108 1 1.5 2 3 3 2
67 1 4 120 229 0 2 129 1 2.6 2 2 7 1
41 0 2 130 204 0 2 172 0 1.4 1 0 3 0
62 0 4 140 268 0 2 160 0 3.6 3 2 3 3
60 1 4 130 206 0 2 132 1 2.4 2 2 7 4
Program:-
import numpy as np
import pandas as pd
import csv
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.models import BayesianModel
from pgmpy.inference import VariableElimination
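Only the imports of the program survive above; a minimal sketch of the remaining steps, assuming a heart.csv file and a network structure commonly used for this experiment (the edge list and query variables are assumptions):

# read the heart-disease data; '?' marks missing values
heartDisease = pd.read_csv('heart.csv')
heartDisease = heartDisease.replace('?', np.nan)

# define the Bayesian network structure (assumed edge list)
model = BayesianModel([('age', 'heartdisease'), ('sex', 'heartdisease'),
                       ('exang', 'heartdisease'), ('cp', 'heartdisease'),
                       ('heartdisease', 'restecg'), ('heartdisease', 'chol')])

# learn the CPDs from the data using Maximum Likelihood Estimation
model.fit(heartDisease, estimator=MaximumLikelihoodEstimator)

# run inference by Variable Elimination
infer = VariableElimination(model)
print('Probability of HeartDisease given evidence restecg=1:')
q1 = infer.query(variables=['heartdisease'], evidence={'restecg': 1})
print(q1)
print('Probability of HeartDisease given evidence cp=2:')
q2 = infer.query(variables=['heartdisease'], evidence={'cp': 2})
print(q2)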
Output:
Experiment-8
Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set for
clustering using k-Means algorithm. Compare the results of these two algorithms and
comment on the quality of clustering. You can add Java/Python ML library classes/API in the
program.
Program:
# Plotting
plt.figure(figsize=(14, 7))
y_gmm = gmm.predict(xs)
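Only the plotting fragment above survives; a minimal sketch of the full experiment, assuming sklearn's KMeans and GaussianMixture on the Iris data (consistent with the scores printed in the output below):

import matplotlib.pyplot as plt
import pandas as pd
from sklearn import datasets, metrics
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

iris = datasets.load_iris()
X = pd.DataFrame(iris.data, columns=['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width'])
y = iris.target

# k-Means clustering with 3 clusters; accuracy is measured against the true
# species labels, so it is sensitive to how cluster numbers happen to align
kmeans = KMeans(n_clusters=3).fit(X)
print('The accuracy score of K-Mean:', metrics.accuracy_score(y, kmeans.labels_))
print('The Confusion matrix of K-Mean:', metrics.confusion_matrix(y, kmeans.labels_))

# EM clustering via a Gaussian Mixture Model on standardized data
xs = pd.DataFrame(StandardScaler().fit_transform(X), columns=X.columns)
gmm = GaussianMixture(n_components=3).fit(xs)
y_gmm = gmm.predict(xs)
print('The accuracy score of EM:', metrics.accuracy_score(y, y_gmm))
print('The Confusion matrix of EM:', metrics.confusion_matrix(y, y_gmm))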
Output:-
The accuracy score of K-Mean: 0.09333333333333334
The Confusion matrix of K-Mean: [[ 0 50 0]
[ 2 0 48]
[36 0 14]]
The accuracy score of EM: 0.9666666666666667
The Confusion matrix of EM: [[50 0 0]
[ 0 45 5]
[ 0 0 50]]
Experiment-9
Write a program to implement k-Nearest Neighbour algorithm to classify the iris data set.
Print both correct and wrong predictions. Java/Python ML library classes can be used for this
problem.
Data Set:
Iris Plants Dataset: Dataset contains 150 instances (50 in each of three classes)
Number of Attributes: 4 numeric, predictive attributes and the Class
Program:
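The program listing did not survive in this copy; a minimal sketch using sklearn's KNeighborsClassifier on the Iris data, printing correct and wrong predictions as the aim requires (k=5 is an assumption):

from sklearn import datasets, metrics
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = datasets.load_iris()
x_train, x_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3)

classifier = KNeighborsClassifier(n_neighbors=5).fit(x_train, y_train)
y_pred = classifier.predict(x_test)

# print both correct and wrong predictions
for i in range(len(x_test)):
    verdict = 'Correct' if y_pred[i] == y_test[i] else 'Wrong'
    print(x_test[i], 'Predicted:', iris.target_names[y_pred[i]],
          'Actual:', iris.target_names[y_test[i]], '->', verdict)

print('Confusion Matrix')
print(metrics.confusion_matrix(y_test, y_pred))
print('Accuracy Metrics')
print(metrics.classification_report(y_test, y_pred))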
Output:
Confusion Matrix
[[20 0 0]
[ 0 10 0]
[ 0 1 14]]
Accuracy Metrics
Program:-
import numpy as np
from bokeh.plotting import figure, show, output_notebook
from bokeh.layouts import gridplot
from bokeh.io import push_notebook
# Predict value
return x0 @ beta # @ Matrix Multiplication or Dot Product for prediction
n = 1000
# Generate dataset
X = np.linspace(-3, 3, num=n)
print("The Data Set (10 Samples) X :\n", X[:10])
Y = np.log(np.abs(X ** 2 - 1) + .5)
print("The Fitting Curve Data Set (10 Samples) Y :\n", Y[:10])
# Jitter X
X += np.random.normal(scale=.1, size=n)
print("Normalized (10 Samples) X :\n", X[:10])
domain = np.linspace(-3, 3, num=300)
print("X0 Domain Space (10 Samples) :\n", domain[:10])
# Plotting function
def plot_lwr(tau):
# Prediction through regression
prediction = [local_regression(x0, X, Y, tau) for x0 in domain]
plot = figure(plot_width=400, plot_height=400)
plot.title.text = 'tau=%g' % tau
plot.scatter(X, Y, alpha=.3)
plot.line(domain, prediction, line_width=2, color='red')
return plot
# An alternative matrix-based implementation survives only as a fragment;
# kernel() is assumed to build the diagonal matrix of Gaussian weights.
def localWeight(point, xmat, ymat, k):
    wei = kernel(point, xmat, k)
    W = (xmat.T * (wei * xmat)).I * (xmat.T * (wei * ymat.T))
    return W

def localWeightRegression(xmat, ymat, k):
    m, n = np.shape(xmat)
Output: