ML Record
HINDUSTHAN
COLLEGE OF ENGINEERING AND TECHNOLOGY
(AUTONOMOUS INSTITUTION)
Coimbatore - 641032
REG.NO :
NAME :
COURSE :
YEAR/SEM:
HINDUSTHAN
COLLEGE OF ENGINEERING AND TECHNOLOGY
(AUTONOMOUS INSTITUTION)
Coimbatore-641032.
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
Place : Coimbatore
Date:
Register Number:
Submitted for the 22CS5252 / MACHINE LEARNING LABORATORY practical examination
conducted on ____________.
INTERNAL EXAMINER                                        EXTERNAL EXAMINER
CONTENTS

S.NO    DATE    EXPERIMENT                                                                              PAGE NO    MARKS    SIGN
1 a)            Implementation of Basic Python Libraries (Math, Numpy, Scipy)
1 b)            Implementation of Python Libraries for Machine Learning Applications (Pandas, Matplotlib)
1 c)            Creation and Loading of Datasets
2               Find-S Algorithm for Hypothesis Selection
3               Support Vector Machine (SVM) Decision Boundary
4               Decision Tree Classification using ID3 Algorithm
5               Clustering Using EM (GMM) and k-Means Algorithms
6               k-Nearest Neighbor Classification
STAFF IN-CHARGE
Ex.No: 01 a)
Implementation of Basic Python Libraries (Math, Numpy, Scipy)
Date:
Aim:
Algorithm:
Program:
# Importing libraries
import math
import numpy as np
from scipy import integrate
from scipy import linalg

# Math operations
print("Square root of 16:", math.sqrt(16))
print("Factorial of 5:", math.factorial(5))

# Array operations (sample values are illustrative)
array = np.array([1, 2, 3, 4, 5])
print("Array after adding 10:", array + 10)
print("Mean of array:", np.mean(array))
print("Standard deviation of array:", np.std(array))
print("Dot product of array with itself:", np.dot(array, array))

# Scipy operations
print("Determinant of [[1, 2], [3, 4]]:", linalg.det(np.array([[1, 2], [3, 4]])))
print("Integral of x^2 from 0 to 1:", integrate.quad(lambda x: x**2, 0, 1)[0])
Result:
Ex.No: 01 b)
Implementation of Python Libraries for Machine Learning Applications (Pandas, Matplotlib)
Date:
Aim:
Algorithm:
Program:
# Importing libraries
import pandas as pd
import matplotlib.pyplot as plt
# Creating a DataFrame from a small sample dataset (values are illustrative)
data = {'Age': [22, 25, 28, 32, 35], 'Salary': [25000, 32000, 40000, 52000, 61000]}
df = pd.DataFrame(data)
print("Dataframe:\n", df)
# Data inspection
print("\nBasic Data Information:")
print("Data types:\n", df.dtypes)
print("Summary statistics:\n", df.describe())
# Filtering data
filtered_df = df[df['Age'] > 25]
print("\nFiltered data (Age > 25):\n", filtered_df)
# Plotting Age vs Salary as a scatter plot
plt.figure(figsize=(8, 5))
plt.scatter(df['Age'], df['Salary'], color='green')
plt.xlabel('Age')
plt.ylabel('Salary')
plt.title('Salary vs Age')
plt.show()
Result:
Ex.No:01 c)
Creation and Loading of Datasets
Date:
Aim:
Algorithm:
Program:
# Importing libraries
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification, load_iris
import seaborn as sns
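The body of the program is not reproduced above. The following is a minimal sketch of dataset creation and loading consistent with the imports; the make_classification parameters, column names, and the pairplot are illustrative assumptions rather than part of the original record.
import matplotlib.pyplot as plt

# Step 1: Create a synthetic classification dataset (parameters are illustrative)
X, y = make_classification(n_samples=100, n_features=4, n_informative=3,
                           n_redundant=1, random_state=42)
synthetic_df = pd.DataFrame(X, columns=['f1', 'f2', 'f3', 'f4'])
synthetic_df['target'] = y
print("Synthetic dataset:\n", synthetic_df.head())

# Step 2: Load the built-in Iris dataset and convert it to a DataFrame
iris = load_iris()
iris_df = pd.DataFrame(iris.data, columns=iris.feature_names)
iris_df['species'] = iris.target
print("Iris dataset:\n", iris_df.head())

# Step 3: Visualise the loaded dataset with seaborn
sns.pairplot(iris_df, hue='species')
plt.show()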
Result:
Ex.No:02
Find-S Algorithm for Hypothesis Selection
Date:
Aim:
Algorithm:
Program:
Let's start by creating a sample CSV file with training data. The CSV file should contain rows with attribute
values and the class label (e.g., "Yes" or "No").
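For illustration, such a file (training_data.csv is the name assumed in the program below) might contain rows in the style of the classic EnjoySport example; the values here are only a sample:
Sunny,Warm,Normal,Strong,Warm,Same,Yes
Sunny,Warm,High,Strong,Warm,Same,Yes
Rainy,Cold,High,Strong,Warm,Change,No
Sunny,Warm,High,Strong,Cool,Change,Yes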
import csv

# Step 1: Read the CSV file; each row holds attribute values followed by the class label
def read_csv(file_path):
    data = []
    with open(file_path, mode='r') as file:
        reader = csv.reader(file)
        for row in reader:
            data.append(row)
    return data

# Step 2: Find-S - start from the first positive example and generalise attribute by attribute
def find_s(data):
    hypothesis = None
    for *attributes, label in data:
        if label == "Yes":
            if hypothesis is None:
                hypothesis = list(attributes)   # initialise with the first positive example
            else:
                hypothesis = [h if h == a else '?' for h, a in zip(hypothesis, attributes)]
    return hypothesis

def main():
    # file name assumed; use the sample CSV created above
    print("Most specific hypothesis:", find_s(read_csv("training_data.csv")))

if __name__ == "__main__":
    main()
Output:
Result:
Ex.No:03
Support Vector Machine (SVM) Decision Boundary
Date:
Aim:
Algorithm:
Program:
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Step 2: Split the dataset into training and testing sets (80% train, 20% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Step 4: Train the SVM classifier (use a linear kernel for simplicity)
svm = SVC(kernel='linear', random_state=42)
svm.fit(X_train, y_train)
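The listing above references X and y and omits Steps 1, 3, and 5. A minimal self-contained sketch of the full experiment is given below; the choice of the Iris dataset (restricted to its first two features so the boundary can be drawn in 2-D), the mesh step, and the plot styling are assumptions rather than part of the original record.
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

# Step 1: Load the dataset (Iris assumed; only two features kept for 2-D plotting)
iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target

# Step 2: Split the dataset into training and testing sets (80% train, 20% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 3: Standardise the features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Step 4: Train the SVM classifier (linear kernel for simplicity) and report accuracy
svm = SVC(kernel='linear', random_state=42)
svm.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, svm.predict(X_test)))

# Step 5: Plot the decision boundary over a grid of points
x_min, x_max = X_train[:, 0].min() - 1, X_train[:, 0].max() + 1
y_min, y_max = X_train[:, 1].min() - 1, X_train[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02), np.arange(y_min, y_max, 0.02))
Z = svm.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.3)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, edgecolors='k')
plt.xlabel('Feature 1 (standardised)')
plt.ylabel('Feature 2 (standardised)')
plt.title('SVM Decision Boundary (linear kernel)')
plt.show()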
Result:
Ex.No:04
Decision Tree Classification using ID3 Algorithm
Date:
Aim:
Algorithm:
Program:
# Importing necessary libraries
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
# Convert to DataFrame
df = pd.DataFrame(data)
# Encode the categorical variables (Weather, Temperature, PlayTennis)
df_encoded = pd.get_dummies(df)
prediction = clf.predict(new_sample)
print("Predicted class for the new sample:" ,{prediction[0]})
Result:
Ex.No:05
Clustering Using EM (GMM) and k-Means Algorithms
Date:
Aim:
Algorithm:
Program:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
# Optional: print cluster centers for K-Means and GMM component means
print("K-Means Cluster Centers:")
print(kmeans.cluster_centers_)
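The data generation, model fitting, and plots are missing from the listing above. A minimal self-contained sketch is given below; the use of make_blobs, its parameters, and the number of clusters/components are assumptions rather than part of the original record.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Generate synthetic data with three well-separated blobs (parameters assumed)
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=42)

# k-Means clustering
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
kmeans_labels = kmeans.fit_predict(X)

# EM clustering via a Gaussian Mixture Model
gmm = GaussianMixture(n_components=3, random_state=42)
gmm_labels = gmm.fit_predict(X)

# Print cluster centers for k-Means and component means for the GMM
print("K-Means Cluster Centers:")
print(kmeans.cluster_centers_)
print("GMM Component Means:")
print(gmm.means_)

# Plot the two clusterings side by side
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(X[:, 0], X[:, 1], c=kmeans_labels)
axes[0].set_title('k-Means Clustering')
axes[1].scatter(X[:, 0], X[:, 1], c=gmm_labels)
axes[1].set_title('EM Clustering (GMM)')
plt.show()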
Result:
Ex.No:06
k-Nearest Neighbor Classification
Date:
Aim:
Algorithm:
Program:
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Step 1: Load the Iris dataset
X, y = load_iris(return_X_y=True)

# Step 2: Split the dataset into training and testing sets (80% train, 20% test)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 3: Initialize and train the k-NN classifier (k=3 in this case)
k = 3
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train, y_train)

# Step 4: Predict on the test set and report the accuracy
y_pred = knn.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))

# Step 5: Separate the correctly and incorrectly classified test samples
correct_predictions = []
incorrect_predictions = []
for i in range(len(y_test)):
    if y_pred[i] == y_test[i]:
        correct_predictions.append((X_test[i], y_test[i], y_pred[i]))
    else:
        incorrect_predictions.append((X_test[i], y_test[i], y_pred[i]))
print("Correct predictions:", len(correct_predictions))
print("Incorrect predictions:", len(incorrect_predictions))
Result: