
J.P. COLLEGE OF ENGINEERING
College Road, Ayikudi, Tenkasi – 627852
Affiliated to Anna University and Approved by AICTE

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CS3491 - ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

REGULATION - 2021

YEAR/SEM: II/IV

LABORATORY MANUAL
College Vision and Mission:

Vision

To evolve as a Centre of Excellence in Teaching, Innovative Research and Consultation in Engineering and Technology, and to empower rural youth with technical knowledge and professional competence, thereby transforming them into globally competitive and self-disciplined technocrats.

Mission

To inculcate technical knowledge and soft skills among rural students through a student-centric learning process and make them competent Engineers with professional ethics to face global challenges, thus bridging the 'rural-urban divide'.

Department Vision and Mission:

VISION: To provide an academically conducive environment for individuals to develop as technologically superior, socially conscious and nationally responsible citizens.

MISSION:
M1: To develop our department as a center of excellence, imparting quality education and generating competent and skilled manpower.

M2: We prepare our students with a high degree of credibility, integrity, ethical standards and social concern.

M3: We train our students to devise and implement novel systems, based on Education and Research.
COURSE OUTCOMES:
At the end of this course, the students will be able to

CO1 Use appropriate search algorithms for problem solving

CO2 Apply reasoning under uncertainty

CO3 Build supervised learning models

CO4 Build ensembling and unsupervised models

CO5 Build deep learning neural network models


PROGRAM OUTCOMES (POs)

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.

2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.

3. Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental considerations.

4. Conduct investigations of complex problems: Use research-based knowledge and research methods including design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.

5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools including prediction and modeling to complex engineering activities with an understanding of the limitations.

6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.

7. Environment and sustainability: Understand the impact of the professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.

8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering practice.

9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.

10. Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.

11. Project management and finance: Demonstrate knowledge and understanding of the engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects and in multidisciplinary environments.

12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in independent and life-long learning in the broadest context of technological change.
INSTRUCTION:

Do's in the Computer Lab:

1. Turn off the machine when you are no longer using it.
2. Report any broken plugs or exposed electrical wires to the teacher immediately.
3. Always SAVE your progress.
4. Always maintain an extra copy of all your data files.
5. Make sure your external devices are VIRUS FREE.
6. Feel free to ask for assistance.
7. Behave properly.

Don'ts in the Computer Lab:

1. Do not eat or drink inside the laboratory.
2. Avoid stepping on electrical wires or any other computer cables.
3. Do not open the system unit casing or monitor casing, particularly when the power is turned on (30,000 volts).
4. Do not insert metal objects such as clips, pins, and needles into the computer casings.
5. Do not remove anything from the computer laboratory without permission.
6. Do not touch, connect, or disconnect any plug or cable without permission.
7. Do not touch any circuit boards or power sockets when something is connected to them or switched on.
8. Do not open an external device without scanning it for computer viruses.
9. Do not change the icons on the computer screen.
10. Do not switch the keyboard letters around.
11. Do not go to programs you don't know of.
12. Do not install any other programs unless told to.
13. Do not unplug anything unless the computer has properly shut down.
14. Do not copy the work of other students.
15. Do not attempt to repair, open, tamper, or interfere with anything inside the lab.
16. Do not plug in any other devices.
LIST OF EXPERIMENTS:

1. Implementation of Uninformed search algorithms (BFS, DFS)


2. Implementation of Informed search algorithms (A*, memory-bounded A*)

3. Implement naïve Bayes models

4. Implement Bayesian Networks

5. Build Regression models

6. Build decision trees and random forests

7. Build SVM models

8. Implement ensembling techniques

9. Implement clustering algorithms

10. Implement EM for Bayesian networks

11. Build simple NN models

12. Build deep learning NN models


EX.NO   EXERCISE NAME

1       Uninformed search algorithms (BFS, DFS)
2       Informed search algorithms (A*, memory-bounded A*)
3       Naïve Bayes models
4       Bayesian Networks
5       Regression models
6       Decision trees and Random forests
7       SVM models
8       Ensembling techniques
9       Clustering algorithms
10      EM for Bayesian networks
11      Simple NN models
12      Deep Learning NN models

CONTENT BEYOND EXPERIMENTS

13      Implement k-Nearest Neighbors (K-NN) for classification
14      Build a Convolutional Neural Network (CNN) for image classification

Exercise No: 1

UNINFORMED SEARCH ALGORITHMS (BFS, DFS)


Aim

To Implement Uninformed search algorithms (BFS, DFS)

1. Implementation of BFS
Algorithm:
1. Pick any node, visit the adjacent unvisited vertex, mark it as visited, display it, and insert it in a queue.
2. If there are no remaining adjacent vertices left, remove the first vertex from the queue.
3. Repeat steps 1 and 2 until the queue is empty or the desired node is found.
Program
from collections import deque

# Define the bfs function
def bfs(graph, start):
    visited = set()
    queue = deque([start])
    visited.add(start)
    while queue:
        vertex = queue.popleft()
        print(str(vertex) + " ", end="")
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}
bfs(graph, 'A')

Output

A B C D E F

2. Implementation of DFS
Algorithm :
1. We will start by putting any one of the graph's vertices on top of the stack.
2. After that, take the top item of the stack and add it to the visited list.
3. Next, create a list of that vertex's adjacent nodes. Add the ones which are not in the visited list to the top of the stack.
4. Lastly, keep repeating steps 2 and 3 until the stack is empty.
Program
# Define the graph using an adjacency list
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

# Define the DFS function
def dfs(graph, node, visited):
    # Mark the current node as visited
    visited.append(node)
    print(node, end=' ')
    # For each neighbor of the current node
    for neighbor in graph[node]:
        if neighbor not in visited:
            # Recursively call the DFS function on the neighbor
            dfs(graph, neighbor, visited)

# Call the DFS function on the starting node
visited = []
dfs(graph, 'A', visited)
Output

A B D E F C

Result

Thus the uninformed search algorithms (BFS, DFS) are implemented successfully using Python.

Exercise No: 2

INFORMED SEARCH ALGORITHMS (A*, Memory-bounded A*)


Aim

To implement Informed search algorithms (A*, memory-bounded A*)

1. Implementation of A* Search
Algorithm :
1. Add the start node to the open list.
2. From the open list, pick the node with the least cost f and move it to the closed list.
3. For each node adjacent to the current node:
   a. If the node is not reachable, ignore it. Else:
      i. If the node is not on the open list, move it to the open list and calculate f, g, h.
      ii. If the node is already on the open list, check if the path it offers is cheaper than the current one and, if so, switch to it.
4. Stop working when:
   a. You find the destination, or
   b. You cannot find the destination after going through all possible points.

Program:
class Graph:
    def __init__(self, adjacency_list):
        self.adjacency_list = adjacency_list

    def get_neighbors(self, v):
        return self.adjacency_list[v]

    # heuristic function with equal values for all nodes
    def h(self, n):
        H = {
            'A': 1,
            'B': 1,
            'C': 1,
            'D': 1
        }
        return H[n]

    def a_star_algorithm(self, start_node, stop_node):
        # open_list is a list of nodes which have been visited, but whose neighbors
        # haven't all been inspected; it starts off with the start node
        # closed_list is a list of nodes which have been visited
        # and whose neighbors have been inspected
        open_list = set([start_node])
        closed_list = set([])

        # g contains current distances from start_node to all other nodes
        # the default value (if it's not found in the map) is +infinity
        g = {}
        g[start_node] = 0

        # parents contains an adjacency map of all nodes
        parents = {}
        parents[start_node] = start_node

        while len(open_list) > 0:
            n = None

            # find a node with the lowest value of f() - evaluation function
            for v in open_list:
                if n == None or g[v] + self.h(v) < g[n] + self.h(n):
                    n = v

            if n == None:
                print('Path does not exist!')
                return None

            # if the current node is the stop_node
            # then we begin reconstructing the path from it to the start_node
            if n == stop_node:
                reconst_path = []
                while parents[n] != n:
                    reconst_path.append(n)
                    n = parents[n]
                reconst_path.append(start_node)
                reconst_path.reverse()
                print('Path found: {}'.format(reconst_path))
                return reconst_path

            # for all neighbors of the current node do
            for (m, weight) in self.get_neighbors(n):
                # if the current node isn't in both open_list and closed_list
                # add it to open_list and note n as its parent
                if m not in open_list and m not in closed_list:
                    open_list.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight

                # otherwise, check if it's quicker to first visit n, then m
                # and if it is, update parent data and g data
                # and if the node was in the closed_list, move it to open_list
                else:
                    if g[m] > g[n] + weight:
                        g[m] = g[n] + weight
                        parents[m] = n

                        if m in closed_list:
                            closed_list.remove(m)
                            open_list.add(m)

            # remove n from the open_list, and add it to closed_list
            # because all of its neighbors were inspected
            open_list.remove(n)
            closed_list.add(n)

        print('Path does not exist!')
        return None


adjacency_list = {
    'A': [('B', 1), ('C', 3), ('D', 7)],
    'B': [('D', 5)],
    'C': [('D', 12)]
}
graph1 = Graph(adjacency_list)
graph1.a_star_algorithm('A', 'D')

Output

Path found: ['A', 'B', 'D']



2. Implementation of Memory Bounded A* Search


Algorithm:
1. Initialize the start node and goal node.
2. Initialize the open list with the start node.
3. Initialize the closed list as an empty set.
4. Initialize the maximum memory threshold.
5. Set the current memory usage to zero.
6. Repeat the following steps until the open list is empty or the goal node is found:
   a. If the current memory usage exceeds the maximum memory threshold, prune the open list by removing the node with the highest f-score.
   b. Pop the node with the lowest f-score from the open list and add it to the closed list.
   c. If the popped node is the goal node, return the path.
   d. Generate the successors of the popped node and calculate their f-scores.
   e. Add the successors to the open list if they are not already in the closed list and if their f-scores are lower than the maximum f-score seen so far.
   f. Update the maximum f-score seen so far.
   g. Update the current memory usage.
7. If the goal node was not found, return failure.
Program:
import heapq

class MemoryBoundedAStar:
    def __init__(self, start_state, goal_state, successors_fn, heuristic_fn, max_memory):
        self.start_state = start_state
        self.goal_state = goal_state
        self.successors_fn = successors_fn
        self.heuristic_fn = heuristic_fn
        self.max_memory = max_memory

    def search(self):
        # Initialize the frontier with the starting state
        frontier = [(self.heuristic_fn(self.start_state), 0, self.start_state)]
        # Initialize the explored set as an empty set
        explored = set()
        # Initialize the maximum memory used as zero
        max_memory_used = 0

        # Loop until the frontier is empty
        while frontier:
            # Get the state with the lowest estimated total cost from the frontier
            _, total_cost, current_state = heapq.heappop(frontier)
            # Check if the current state is the goal state
            if current_state == self.goal_state:
                return (total_cost, max_memory_used)
            # Add the current state to the explored set
            explored.add(current_state)
            # Get the successors of the current state
            successors = self.successors_fn(current_state)
            # Loop through the successors
            for successor in successors:
                # Check if the successor has not been explored
                if successor not in explored:
                    # Calculate the estimated total cost of the successor
                    successor_cost = total_cost + successor[2] + self.heuristic_fn(successor[0])
                    # Check if the estimated total cost is within the maximum memory limit
                    if successor_cost <= self.max_memory:
                        # Add the successor to the frontier
                        heapq.heappush(frontier, (successor_cost, total_cost + successor[2], successor[0]))
                        # Update the maximum memory used
                        max_memory_used = max(max_memory_used, successor_cost)

        # Return failure if the frontier is empty and the goal state has not been found
        return "failure"

# Define the start state
start_state = (0, 0)

# Define the goal state
goal_state = (4, 4)

# Define the successors function
def successors_fn(state):
    x, y = state
    successors = []
    if x < 4:
        successors.append(((x+1, y), "right", 1))
    if y < 4:
        successors.append(((x, y+1), "up", 1))
    if x > 0:
        successors.append(((x-1, y), "left", 1))
    if y > 0:
        successors.append(((x, y-1), "down", 1))
    return successors

# Define the heuristic function
def heuristic_fn(state):
    x, y = state
    return max(abs(x-4), abs(y-4))

# Define the maximum memory
max_memory = 10

# Create an instance of the MemoryBoundedAStar class
mbs = MemoryBoundedAStar(start_state, goal_state, successors_fn, heuristic_fn, max_memory)

# Run the search algorithm
result = mbs.search()

# Print the result
print(result)

Output

(8, 10)

Result
Thus the Informed search algorithms (A*, memory-bounded A*) are implemented successfully using Python.

Exercise No: 3

NAÏVE BAYES MODEL


Aim

To implement naïve Bayes models


Algorithm
1. Load the data set
2. Split the dataset into training and testing sets
3. Create a Naive Bayes model for discrete or continuous features
4. Train the model on the training data
5. Calculate the accuracy of the model
6. Deploy the model
Program
1. Using Gaussian Naive Bayes Model
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the dataset
from sklearn.datasets import load_iris
iris = load_iris()

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3, random_state=42)

# Create a Gaussian Naive Bayes model
model = GaussianNB()

# Train the model on the training data
model.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = model.predict(X_test)

# Calculate the accuracy of the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy: {:.2f}%".format(accuracy * 100))

Output

Accuracy: 97.78%

2. Using Multinomial Naive Bayes object


# Import the necessary libraries
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer

# Define the training data
training_data = ["This is a positive statement",
                 "This is a negative statement",
                 "I am happy with this statement",
                 "I am not happy with this statement",
                 "This is a good statement",
                 "This is not good"]

# Define the target labels
target_labels = ['positive', 'negative', 'positive', 'negative', 'positive', 'negative']

# Create a count vectorizer object
count_vect = CountVectorizer()

# Transform the training data into a bag of words
training_data = count_vect.fit_transform(training_data)

# Create a Multinomial Naive Bayes object
clf = MultinomialNB()

# Train the model using the training data and labels
clf.fit(training_data, target_labels)

# Test the model with some input data
test_data = ["This statement is positive"]
test_data = count_vect.transform(test_data)
print(clf.predict(test_data))

Output

['positive']

3. Using Bernoulli Naive Bayes classifier


from sklearn.naive_bayes import BernoulliNB
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the digits dataset
digits = load_digits()

# Binarize the data using a threshold
X_bin = (digits.data > 8).astype(int)

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X_bin, digits.target, test_size=0.2)

# Create a Bernoulli Naive Bayes classifier
bnb = BernoulliNB()

# Train the classifier using the training data
bnb.fit(X_train, y_train)

# Predict the classes of the testing data
y_pred = bnb.predict(X_test)

# Print the accuracy of the classifier
print("Accuracy:", accuracy_score(y_test, y_pred))

Output

Accuracy: 0.8861111111111111

Result
Thus the naïve Bayes models are implemented successfully using Python.

Exercise No: 4

BAYESIAN NETWORKS
Aim

To Implement Bayesian Networks


Algorithm
1. Define the structure of the Bayesian network
2. Define the Conditional probability distributions(CPDs)
3. Add the CPDs to the model
4. Check if the model is valid
5. Initialize the inference engine
6. Make predictions
7. Evaluate the model
Program
# Import the necessary libraries
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD

# Define the model structure
model = BayesianNetwork([('B', 'A'), ('E', 'A'), ('A', 'J'), ('A', 'M')])

# Define the CPDs (Conditional Probability Distributions)
cpd_b = TabularCPD(variable='B', variable_card=2, values=[[0.999], [0.001]])
cpd_e = TabularCPD(variable='E', variable_card=2, values=[[0.998], [0.002]])
cpd_a = TabularCPD(variable='A', variable_card=2,
                   values=[[0.999, 0.71, 0.06, 0.05],
                           [0.001, 0.29, 0.94, 0.95]],
                   evidence=['B', 'E'], evidence_card=[2, 2])
cpd_j = TabularCPD(variable='J', variable_card=2,
                   values=[[0.99, 0.05], [0.01, 0.95]],
                   evidence=['A'], evidence_card=[2])
cpd_m = TabularCPD(variable='M', variable_card=2,
                   values=[[0.7, 0.01], [0.3, 0.99]],
                   evidence=['A'], evidence_card=[2])

# Associate the CPDs with the model structure
model.add_cpds(cpd_b, cpd_e, cpd_a, cpd_j, cpd_m)

# Check if the model is correctly defined
model.check_model()

Output

True
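The algorithm above also calls for initializing an inference engine and making predictions (steps 5 and 6), which the program stops short of. A minimal sketch of how this could be done with pgmpy's VariableElimination is given below, reusing the model defined above; the query variable and evidence values are illustrative choices, not part of the original program.

# A minimal inference sketch for the model above (algorithm steps 5-6).
# The query variable and evidence values are illustrative choices.
from pgmpy.inference import VariableElimination

infer = VariableElimination(model)
# P(B | J=1, M=1): probability of Burglary given that both J and M are observed as 1
result = infer.query(variables=['B'], evidence={'J': 1, 'M': 1})
print(result)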

Result
Thus the Bayesian network model is implemented successfully using Python.

Exercise No: 5

REGRESSION MODELS
Aim

To build Regression models

1. LINEAR REGRESSION
Algorithm
1. Import libraries
2. Generate example data
3. Split data into training and testing sets
4. Create and fit the linear regression model
5. Make predictions on the test data
6. Calculate mean squared error (MSE)
7. Print coefficients and MSE
8. Plot the data and the linear regression line
Program
# Import libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Generate example data
np.random.seed(0)
X = np.random.rand(100, 1) * 10  # Input features
y = 3 * X + np.random.randn(100, 1) * 2  # Target variable with noise

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Create and fit the linear regression model
regression_model = LinearRegression()
regression_model.fit(X_train, y_train)

# Make predictions on the test data
y_pred = regression_model.predict(X_test)

# Calculate mean squared error (MSE)
mse = mean_squared_error(y_test, y_pred)

# Print coefficients and MSE
print("Coefficients: ", regression_model.coef_)
print("Intercept: ", regression_model.intercept_)
print("Mean Squared Error (MSE): ", mse)

# Plot the data and the linear regression line
plt.scatter(X_test, y_test, color='blue', label='Actual')
plt.plot(X_test, y_pred, color='red', label='Predicted')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.show()

Output

Coefficients: [[2.9745886]]
Intercept: [0.64471706]
Mean Squared Error (MSE): 4.17373352627807

2. LOGISTIC REGRESSION

Algorithm
1. Generate some example data
2. Split the data into training and testing sets
3. Train the model
4. Make predictions on the test data
5. Calculate accuracy of the model
6. Visualize the decision boundary
Program
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Generate some example data
np.random.seed(0)
X = np.random.randn(100, 2)  # 100 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # Binary classification target

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Create a logistic regression model
model = LogisticRegression()

# Train the model
model.fit(X_train, y_train)

# Make predictions on the test data
y_pred = model.predict(X_test)

# Calculate accuracy of the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

# Visualize the decision boundary
plt.figure(figsize=(8, 6))
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='bwr', edgecolors='k')
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")

# Create a grid of points to plot the decision boundary
xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 100),
                     np.linspace(X[:, 1].min(), X[:, 1].max(), 100))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)

# Plot the decision boundary
plt.contourf(xx, yy, Z, alpha=0.8, cmap='bwr', levels=1)
plt.colorbar()
plt.title("Logistic Regression Decision Boundary")
plt.show()

Output

Accuracy: 1.0

3. BAYESIAN LINEAR REGRESSION

Algorithm
1. Generate synthetic data
2. Fit Bayesian linear regression
3. Predict using the trained model
4. Plot the data, true line, and predicted line with uncertainty
Program
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import BayesianRidge

# Generate synthetic data
np.random.seed(0)
X = np.random.rand(40, 1) * 10
y = 2 * X.squeeze() + 1 + np.random.randn(40)

# Fit Bayesian linear regression
regressor = BayesianRidge()
regressor.fit(X, y)

# Predict using the trained model
X_test = np.linspace(0, 10, 100).reshape(-1, 1)
y_pred, y_std = regressor.predict(X_test, return_std=True)

# Plot the data, true line, and predicted line with uncertainty
plt.scatter(X, y, color='blue', label='Data')
plt.plot(X_test.squeeze(), 2 * X_test.squeeze() + 1, color='green', label='True Line')
plt.plot(X_test.squeeze(), y_pred, color='red', label='Predicted Line')
plt.fill_between(X_test.squeeze(), y_pred - y_std, y_pred + y_std, color='pink', alpha=0.5, label='Uncertainty')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.show()

Output

Result
Thus the regression models are built successfully using Python.

Exercise No: 6

DECISION TREES AND RANDOM FORESTS


Aim

To build decision trees and random forests

1. BUILD DECISION TREES

Algorithm
1. Collect a dataset
2. Clean, normalize and transform the data
3. Split the dataset into training and testing subsets
4. Build the decision tree by recursively splitting the dataset into smaller subsets
5. Train the model
6. Make predictions
7. Evaluate the model
Program
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import accuracy_score, mean_squared_error
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
mse = mean_squared_error(y_test, y_pred)
print("Mean Squared Error:", mse)

from sklearn.tree import plot_tree
import matplotlib.pyplot as plt

plt.figure(figsize=(12, 6))
plot_tree(clf, filled=True)
plt.show()

Output

Accuracy: 1.0
Mean Squared Error: 0.0

2. BUILD RANDOM FORESTS

Algorithm
1. Collect a dataset
2. Clean, normalize and transform the data
3. Split the dataset into training and testing subsets
4. Build decision trees by recursively splitting the dataset into smaller subsets
5. Assemble the decision trees into a forest by training each tree on a different subset of the training data
6. Make predictions
7. Evaluate the model

Random Forests in `scikit-learn` (with N = 100)
Program
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn import tree
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load the Dataset
data = load_iris()
df = pd.DataFrame(data.data, columns=data.feature_names)
df['target'] = data.target

# Arrange Data into Features Matrix and Target Vector
X = df.loc[:, df.columns != 'target']
y = df.loc[:, 'target'].values

# Split the data into training and testing sets
X_train, X_test, Y_train, Y_test = train_test_split(X, y, random_state=0)

# Random Forests in `scikit-learn` (with N = 100)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, Y_train)

fn = data.feature_names
cn = data.target_names
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(4, 4), dpi=800)
tree.plot_tree(rf.estimators_[0],
               feature_names=fn,
               class_names=cn,
               filled=True)
fig.savefig('rf_individualtree.png')

# This may not be the best way to view each estimator as it is small
fn = data.feature_names
cn = data.target_names
fig, axes = plt.subplots(nrows=1, ncols=5, figsize=(10, 2), dpi=900)
for index in range(0, 5):
    tree.plot_tree(rf.estimators_[index],
                   feature_names=fn,
                   class_names=cn,
                   filled=True,
                   ax=axes[index])
    axes[index].set_title('Estimator: ' + str(index), fontsize=11)

fig.savefig('rf_5trees.png')

Output

Result
Thus the decision trees and random forests are built successfully using Python.

Exercise No: 7

SVM MODELS
Aim

To build SVM models
Algorithm

1. Gather a dataset that includes both input features and the corresponding output values
that you want to predict
2. Clean, normalize, and transform the data into a format that can be used for building the
SVM model.
3. Divide the dataset into two subsets: one for training the SVM model and the other for
testing the model.
4. Select a kernel function that will be used to map the input features into a higher-
dimensional space where the data can be separated more easily. Common kernel
functions include Linear, Polynomial, Gaussian RBF, and Sigmoid
5. Train the SVM model on the training data
6. Evaluate the performance of the SVM model
7. Report the accuracy of the SVM model
Program
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

X = np.array([[19, 19000], [35, 20000], [26, 43000], [27, 57000], [19, 76000], [27, 58000],
              [27, 84000], [32, 150000], [25, 33000], [34, 65000], [26, 80000], [26, 52000],
              [20, 86000], [32, 18000], [18, 82000], [29, 80000]])  # age, salary
y = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0])  # purchased

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

from sklearn.svm import SVC
classifier = SVC(kernel='rbf', random_state=0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)

from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)

from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

Output

[[1 2]
[0 1]]
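Step 4 of the algorithm lists several kernel functions, while the program above uses only the RBF kernel. The sketch below is one possible way to compare the listed kernels on the same scaled split, reusing X_train, X_test, y_train and y_test from the program; the particular kernel list and loop are illustrative, not part of the original program.

# A minimal sketch comparing the kernels named in step 4 of the algorithm,
# reusing the scaled X_train, X_test, y_train, y_test from the program above.
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

for kernel in ['linear', 'poly', 'rbf', 'sigmoid']:
    clf = SVC(kernel=kernel, random_state=0)
    clf.fit(X_train, y_train)
    print(kernel, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))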

Result
Thus the SVM models are built successfully using Python.

Exercise No: 8

ENSEMBLING TECHNIQUES
Aim

To implement ensembling techniques

Algorithms
1. Load dataset
2. Assume X is the feature matrix and y is the target variable
3. Split the data into training and testing sets
4. Create voting, stacking, bagging and boosting ensemble classifiers
5. Train the ensemble classifiers
6. Make predictions on test set
7. Calculate accuracy of the ensemble models

Program
# Import libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC

# Load dataset
# Assume X is the feature matrix and y is the target variable
X = np.array([[19, 19000], [35, 20000], [26, 43000], [27, 57000], [19, 76000], [27, 58000],
              [27, 84000], [32, 150000], [25, 33000], [34, 65000], [26, 80000], [26, 52000],
              [20, 86000], [32, 18000], [18, 82000], [29, 80000]])  # age, salary
y = np.array([0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0])  # purchased

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create base classifiers
base_model1 = DecisionTreeClassifier(random_state=42)
base_model2 = SVC(random_state=42)
base_model3 = LogisticRegression(random_state=42)

# Create voting ensemble classifier
voting_model = VotingClassifier(estimators=[('dt', base_model1), ('svm', base_model2), ('lr', base_model3)],
                                voting='hard')

# Train the voting ensemble classifier
voting_model.fit(X_train, y_train)

# Make predictions on test set
y_pred = voting_model.predict(X_test)

# Calculate accuracy of the ensemble model
accuracy = accuracy_score(y_test, y_pred)
print("Voting Ensemble Accuracy: {:.2f}%".format(accuracy * 100))

# Create a base classifier
base_classifier = DecisionTreeClassifier()

# Bagging Classifier
bagging_classifier = BaggingClassifier(base_classifier, n_estimators=10, random_state=42)
bagging_classifier.fit(X_train, y_train)
bagging_predictions = bagging_classifier.predict(X_test)
bagging_accuracy = accuracy_score(y_test, bagging_predictions)
print("Bagging Classifier Accuracy:", bagging_accuracy)

# AdaBoost Classifier
adaboost_classifier = AdaBoostClassifier(base_classifier, n_estimators=10, random_state=42)
adaboost_classifier.fit(X_train, y_train)
adaboost_predictions = adaboost_classifier.predict(X_test)
adaboost_accuracy = accuracy_score(y_test, adaboost_predictions)
print("AdaBoost Classifier Accuracy:", adaboost_accuracy)

# Stacking Classifier
# Create the base classifiers
base_classifier_1 = DecisionTreeClassifier()
base_classifier_2 = LogisticRegression()

# Create the meta-classifier
meta_classifier = LogisticRegression()

# Create the stacking classifier
stacking_classifier = StackingClassifier(estimators=[('classifier1', base_classifier_1),
                                                     ('classifier2', base_classifier_2)])
stacking_classifier.fit(X_train, y_train)
stacking_predictions = stacking_classifier.predict(X_test)
stacking_accuracy = accuracy_score(y_test, stacking_predictions)
print("Stacking Classifier Accuracy:", stacking_accuracy)

# Evaluate classifiers using cross-validation
cv_scores_bagging = cross_val_score(bagging_classifier, X, y, cv=5)
cv_scores_adaboost = cross_val_score(adaboost_classifier, X, y, cv=5)
cv_scores_stacking = cross_val_score(stacking_classifier, X, y, cv=5)
print("Cross-Validation Scores (Bagging):", cv_scores_bagging)
print("Cross-Validation Scores (AdaBoost):", cv_scores_adaboost)
print("Cross-Validation Scores (Stacking):", cv_scores_stacking)

Output

Voting Ensemble Accuracy: 50.00%

Bagging Classifier Accuracy: 0.75
AdaBoost Classifier Accuracy: 0.75
Stacking Classifier Accuracy: 0.75
Cross-Validation Scores (Bagging): [0.5 0.66666667 0.66666667 1. 0.33333333]
Cross-Validation Scores (AdaBoost): [0.5 0.66666667 0.66666667 1. 0.33333333]
Cross-Validation Scores (Stacking): [0.5 0.66666667 0.66666667 1. 0.33333333]

Result
Thus the ensembling techniques are implemented successfully using Python.

Exercise No: 9

CLUSTERING ALGORITHMS
Aim

To implement clustering algorithms

1. GAUSSIAN MIXTURE MODEL

Algorithm
1. Load the iris dataset
2. Select first two columns
3. Turn it into a dataframe
4. Plot the data
5. Fit the GMM model for the dataset, which expresses the dataset as a mixture of 3 Gaussian distributions
6. Assign a label to each sample
7. Plot three clusters in same plot
8. Print the converged log-likelihood value
9. Print the number of iterations needed for the log-likelihood value to converge
Program
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import DataFrame
from sklearn import datasets
from sklearn.mixture import GaussianMixture

# load the iris dataset
iris = datasets.load_iris()

# select first two columns
X = iris.data[:, :2]

# turn it into a dataframe
d = pd.DataFrame(X)

# plot the data
plt.scatter(d[0], d[1])

gmm = GaussianMixture(n_components=3)

# Fit the GMM model for the dataset
# which expresses the dataset as a
# mixture of 3 Gaussian Distributions
gmm.fit(d)

# Assign a label to each sample
labels = gmm.predict(d)
d['labels'] = labels
d0 = d[d['labels'] == 0]
d1 = d[d['labels'] == 1]
d2 = d[d['labels'] == 2]

# plot three clusters in same plot
plt.scatter(d0[0], d0[1], c='r')
plt.scatter(d1[0], d1[1], c='yellow')
plt.scatter(d2[0], d2[1], c='g')

# print the converged log-likelihood value
print(gmm.lower_bound_)

# print the number of iterations needed
# for the log-likelihood value to converge
print(gmm.n_iter_)

Output

-1.4987505566235166
8

2. K-MEANS CLUSTERING ALGORITHM

Algorithm
1. Generate some sample data
2. Initialize the KMeans object
3. Fit the K-means model to the data
4. Access the cluster centers
5. Print the cluster centers
6. Plot the original data points and the predicted labels
Program
import numpy as np
from sklearn.cluster import KMeans

# Generate some sample data
np.random.seed(0)
X = np.concatenate([np.random.normal(0, 1, size=(100, 2)),
                    np.random.normal(5, 0.5, size=(100, 2))])

# Initialize the KMeans object
kmeans = KMeans(n_clusters=2)

# Fit the K-means model to the data
kmeans.fit(X)

# Predict the labels of the data points
labels = kmeans.labels_

# Access the cluster centers
centers = kmeans.cluster_centers_

# Print the cluster centers
print("Cluster centers: ", centers)

# Plot the original data points and the predicted labels
import matplotlib.pyplot as plt

plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis')
plt.scatter(centers[:, 0], centers[:, 1], c='red', marker='o')
plt.xlabel('X1')
plt.ylabel('X2')
plt.title('K-means Clustering')
plt.show()

Output
Cluster centers: [[ 4.92846083e+00 4.94352468e+00]
[-9.57681805e-04 1.42778668e-01]]

3. KNN ALGORITHM

Algorithm

1. Generate a dataset
2. Instantiate a KNN classifier
3. Train the KNN model
4. Predict the labels for the test data
5. Calculate the accuracy of the model
6. Plot the training and test data
Program
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Generate a toy dataset
X, y = make_blobs(n_samples=200, centers=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Instantiate a KNN classifier
k = 5  # Number of neighbors to consider
knn = KNeighborsClassifier(n_neighbors=k)

# Train the KNN model
knn.fit(X_train, y_train)

# Predict the labels for the test data
y_pred = knn.predict(X_test)

# Calculate the accuracy of the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)

# Plot the training and test data
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap='viridis', label='Training Data')
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap='viridis', marker='x', label='Test Data')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('KNN Classification')
plt.legend()
plt.show()

Output

Accuracy: 1.0

4. EXPECTATION MAXIMIZATION ALGORITHM

Algorithm
1. Start with initial estimates of the model parameters
2. calculate the expected value of the latent variables (hidden variables)
3. Iterate between the E-step and M-step until the estimated model parameters converge to
a fixed point
4. Stop the algorithm when the convergence criterion is met or when a maximum number of iterations has been reached.
Program
import numpy as np

def gaussian(x, mu, sigma):
    return 1 / (np.sqrt(2 * np.pi) * sigma) * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def expectation_maximization(data, num_clusters, num_iterations):
    # Initialize the means and variances of the Gaussian distributions
    means = np.random.randn(num_clusters)
    variances = np.random.rand(num_clusters)

    for iteration in range(num_iterations):
        # E-step: compute the responsibilities
        responsibilities = np.zeros((len(data), num_clusters))
        for i in range(len(data)):
            for j in range(num_clusters):
                responsibilities[i, j] = gaussian(data[i], means[j], np.sqrt(variances[j]))
        responsibilities /= np.sum(responsibilities, axis=1, keepdims=True)

        # M-step: update the means and variances
        for j in range(num_clusters):
            means[j] = np.sum(responsibilities[:, j] * data) / np.sum(responsibilities[:, j])
            variances[j] = np.sum(responsibilities[:, j] * (data - means[j]) ** 2) / np.sum(responsibilities[:, j])

    return means, variances


data = np.random.randn(100)
num_clusters = 2
num_iterations = 10

means, variances = expectation_maximization(data, num_clusters, num_iterations)

print("Means:", means)
print("Variances:", variances)

Output

Means: [ 0.03104356 -0.17020411]


Variances: [1.33475242 0.48220106]

Result
Thus the clustering algorithms are implemented successfully using Python.

Exercise No: 10

EM FOR BAYESIAN NETWORK


Aim

To implement EM for Bayesian networks

Algorithm
1. Define the prior probabilities
2. Define the observed data
3. Initialize the model parameters
4. Run the EM Algorithm for 10 iterations
5. E-step: Compute the expected value
6. M-step: Update the model parameters based on the expected value
7. Print the final model parameters

Program

import numpy as np

# Define the prior probabilities
prior_burglary = 0.001
prior_not_burglary = 1 - prior_burglary
prior_alarm_given_burglary = 0.99
prior_alarm_given_not_burglary = 0.01

# Define the observed data (the alarm went off)
observed_alarm = True

# Initialize the model parameters
burglary = prior_burglary
not_burglary = prior_not_burglary
alarm_given_burglary = prior_alarm_given_burglary
alarm_given_not_burglary = prior_alarm_given_not_burglary

# Run the EM algorithm for 10 iterations
for i in range(10):
    # E-step: Compute the expected value of burglary given the observed alarm
    p_burglary_given_alarm = (alarm_given_burglary * burglary) / ((alarm_given_burglary * burglary) + (alarm_given_not_burglary * not_burglary))

    # M-step: Update the model parameters based on the expected value of burglary
    burglary = np.mean(p_burglary_given_alarm)
    not_burglary = 1 - burglary
    alarm_given_burglary = np.sum(p_burglary_given_alarm) / np.sum(burglary)
    alarm_given_not_burglary = np.sum(1 - p_burglary_given_alarm) / np.sum(not_burglary)

# Print the final model parameters
print("P(Burglary) = {:.4f}".format(burglary))
print("P(Not Burglary) = {:.4f}".format(not_burglary))
print("P(Alarm | Burglary) = {:.4f}".format(alarm_given_burglary))
print("P(Alarm | Not Burglary) = {:.4f}".format(alarm_given_not_burglary))

Output

P(Burglary) = 0.0902
P(Not Burglary) = 0.9098
P(Alarm | Burglary) = 1.0000
P(Alarm | Not Burglary) = 1.0000

Result
Thus EM for Bayesian networks is implemented successfully using Python.

Exercise No: 11

SIMPLE NN MODELS
Aim

To build simple NN models
Algorithm
1. Define the problem
2. Gather and prepare data
3. Decide on the architecture of your neural network. This includes the number of layers,
the number of neurons in each layer, and the activation functions (ReLU) used.
4. Initialize the weights of the neural network randomly.
5. Forward propagation: Pass the input data through the neural network, and calculate the
output of the network using the weights.
6. Calculate the error: Calculate the difference between the predicted output of the neural
network and the actual output.
7. Backpropagation: Propagate the error back through the network, adjusting the weights to
minimize the error.
8. Repeat steps 5-7 with the same data until the error is minimized.
9. Validate and test the network to make accurate predictions.
Program
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
model = Sequential()
model.add(Dense(4, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=1000, batch_size=4)
print(model.predict(np.array([[0, 0], [0, 1], [1, 0], [1, 1]])))

Output

Epoch 231/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.6894 - accuracy: 0.5000
Epoch 232/1000
1/1 [==============================] - 0s 11ms/step - loss: 0.6892 - accuracy: 0.5000
Epoch 233/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.6891 - accuracy: 0.5000
Epoch 594/1000
1/1 [==============================] - 0s 8ms/step - loss: 0.5975 - accuracy: 0.7500
Epoch 595/1000
1/1 [==============================] - 0s 8ms/step - loss: 0.5971 - accuracy: 0.7500
.
.
.
.
Epoch 996/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.4194 - accuracy: 1.0000
Epoch 997/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.4190 - accuracy: 1.0000
Epoch 998/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.4186 - accuracy: 1.0000
Epoch 999/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.4181 - accuracy: 1.0000
Epoch 1000/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.4177 - accuracy: 1.0000
[[0.4279034 ]
 [0.6979224 ]
 [0.6604798 ]
 [0.28546384]]

Result
Thus the simple NN models are built successfully using Python.

Exercise No: 12

DEEP LEARNING NN MODELS


Aim

To build deep learning NN models

1. MULTILAYER PERCEPTRON (MLP)

Algorithm
1. Define the problem
2. Gather and prepare data
3. Decide on the architecture of your neural network. This includes the number of layers,
the number of neurons in each layer, and the activation functions (ReLU) used.
4. Initialize the weights of the neural network randomly.
5. Forward propagation: Pass the input data through the neural network, and calculate the
output of the network using the weights.
6. Calculate the error: Calculate the difference between the predicted output of the neural
network and the actual output.
7. Backpropagation: Propagate the error back through the network, adjusting the weights to
minimize the error.
8. Repeat steps 5-7 with the same data until the error is minimized.
9. Validate and test the network to make accurate predictions.
Program
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

model = Sequential()
model.add(Dense(8, input_dim=2, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=1000, batch_size=4, verbose=1)
print(model.predict(np.array([[0, 0], [0, 1], [1, 0], [1, 1]])))

Output

Epoch 86/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.6745 - accuracy: 0.5000
Epoch 87/1000
1/1 [==============================] - 0s 13ms/step - loss: 0.6739 - accuracy: 0.5000
Epoch 88/1000
1/1 [==============================] - 0s 12ms/step - loss: 0.6732 - accuracy: 0.5000
Epoch 89/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.6726 - accuracy: 0.5000
Epoch 90/1000
1/1 [==============================] - 0s 14ms/step - loss: 0.6720 - accuracy: 0.5000.
.
.
Epoch 234/1000
1/1 [==============================] - 0s 7ms/step - loss: 0.5759 - accuracy: 0.7500
Epoch 235/1000
1/1 [==============================] - 0s 8ms/step - loss: 0.5751 - accuracy: 0.7500
Epoch 236/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.5742 - accuracy: 0.7500
Epoch 237/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.5734 - accuracy: 0.7500
.
.
.
Epoch 996/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.0422 - accuracy: 1.0000
Epoch 997/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.0421 - accuracy: 1.0000
Epoch 998/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.0419 - accuracy: 1.0000
Epoch 999/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.0418 - accuracy: 1.0000
Epoch 1000/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.0417 - accuracy: 1.0000
1/1 [==============================] - 0s 89ms/step
[[0.09045944]
 [0.9738202 ]
 [0.97509986]
 [0.01960859]]

2. CONVOLUTIONAL NEURAL NETWORK (CNN)

Algorithm
1. Set random seed for reproducibility
2. Define the input shape of our images
3. Define the number of classes we want to classify
4. Define the model architecture
5. Add the first convolutional layer with 32 filters, a 3x3 kernel, and ReLU activation function
6. Add the first max pooling layer with a 2x2 pool size
7. Add the second convolutional layer with 64 filters and a 3x3 kernel
8. Add the second max pooling layer with a 2x2 pool size
9. Flatten the output from the convolutional layers
10. Add a fully connected layer with 128 units and a ReLU activation function
11. Add the output layer with num_classes units and a softmax activation function
12. Compile the model with categorical cross-entropy loss function and Adam optimizer
13. Print the model summary
Program
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Set random seed for reproducibility
np.random.seed(42)

# Define the input shape of our images
input_shape = (28, 28, 1)  # 28x28 grayscale images

# Define the number of classes we want to classify
num_classes = 10  # digits 0-9

# Define the model architecture
model = Sequential()

# Add the first convolutional layer with 32 filters, a 3x3 kernel, and ReLU activation function
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))

# Add the first max pooling layer with a 2x2 pool size
model.add(MaxPooling2D(pool_size=(2, 2)))

# Add the second convolutional layer with 64 filters and a 3x3 kernel
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))

# Add the second max pooling layer with a 2x2 pool size
model.add(MaxPooling2D(pool_size=(2, 2)))

# Flatten the output from the convolutional layers
model.add(Flatten())

# Add a fully connected layer with 128 units and a ReLU activation function
model.add(Dense(128, activation='relu'))

# Add the output layer with num_classes units and a softmax activation function
model.add(Dense(num_classes, activation='softmax'))

# Compile the model with categorical cross-entropy loss function and Adam optimizer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Print the model summary
model.summary()

Output

Model: "sequential_8"
_________________________________________________________________
Layer (type)                    Output Shape              Param #
=================================================================
conv2d (Conv2D)                 (None, 26, 26, 32)        320

max_pooling2d (MaxPooling2D)    (None, 13, 13, 32)        0

conv2d_1 (Conv2D)               (None, 11, 11, 64)        18496

max_pooling2d_1 (MaxPooling2D)  (None, 5, 5, 64)          0

flatten (Flatten)               (None, 1600)              0

dense_18 (Dense)                (None, 128)               204928

dense_19 (Dense)                (None, 10)                1290

=================================================================
Total params: 225,034
Trainable params: 225,034
Non-trainable params: 0
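The program above only defines, compiles, and summarizes the model. A minimal sketch of training it is given below, using the MNIST digits dataset from keras.datasets, chosen here because its 28x28 grayscale images match input_shape = (28, 28, 1) and its 10 classes match num_classes; the epoch and batch-size values are illustrative and not part of the original program.

# A minimal training sketch (not part of the original program), assuming the
# compiled model above; MNIST matches the (28, 28, 1) input shape and 10 classes.
from keras.datasets import mnist
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)

model.fit(x_train, y_train, epochs=5, batch_size=128, validation_data=(x_test, y_test))
print(model.evaluate(x_test, y_test))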

3. RECURRENT NEURAL NETWORK (RNN)

Algorithm
1. Set random seed for reproducibility
2. Define input sequence
3. Define output sequence
4. Define model architecture
5. Compile model
6. Train model
7. Test model
Program
# Import necessary libraries
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN

# Set random seed for reproducibility
np.random.seed(42)

# Define input sequence
input_sequence = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 0], [1, 0, 0]])

# Define output sequence
output_sequence = np.array([[1], [0], [1], [1], [0]])

# Define model architecture
model = Sequential()
model.add(SimpleRNN(units=4, activation='sigmoid', input_shape=(3, 1)))
model.add(Dense(units=1, activation='sigmoid'))

# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Train model
model.fit(input_sequence.reshape(5, 3, 1), output_sequence, epochs=50, verbose=2)

# Test model
test_sequence = np.array([[0, 1, 1], [1, 1, 1], [0, 0, 1]])
predictions = model.predict(test_sequence.reshape(3, 3, 1))
model.summary()

Output

Epoch 1/50
1/1 - 2s - loss: 0.6728 - accuracy: 0.6000 - 2s/epoch - 2s/step
Epoch 2/50
1/1 - 0s - loss: 0.6723 - accuracy: 0.6000 - 9ms/epoch - 9ms/step
Epoch 3/50
1/1 - 0s - loss: 0.6718 - accuracy: 0.6000 - 8ms/epoch - 8ms/step
Epoch 4/50
1/1 - 0s - loss: 0.6713 - accuracy: 0.6000 - 9ms/epoch - 9ms/step
Epoch 5/50
1/1 - 0s - loss: 0.6708 - accuracy: 0.6000 - 8ms/epoch - 8ms/step
Epoch 6/50
1/1 - 0s - loss: 0.6703 - accuracy: 0.6000 - 9ms/epoch - 9ms/step
.
.
1/1 - 0s - loss: 0.6539 - accuracy: 0.6000 - 13ms/epoch - 13ms/step
Epoch 48/50
1/1 - 0s - loss: 0.6536 - accuracy: 0.6000 - 9ms/epoch - 9ms/step
Epoch 49/50
1/1 - 0s - loss: 0.6533 - accuracy: 0.6000 - 8ms/epoch - 8ms/step
Epoch 50/50
1/1 - 0s - loss: 0.6530 - accuracy: 0.6000 - 8ms/epoch - 8ms/step
1/1 [==============================] - 0s 179ms/step
Model: "sequential_9"
_________________________________________________________________
Layer (type)                    Output Shape              Param #
=================================================================
simple_rnn (SimpleRNN)          (None, 4)                 24

dense_20 (Dense)                (None, 1)                 5

=================================================================
Total params: 29
Trainable params: 29
Non-trainable params: 0

Result
Thus the deep learning NN models are built successfully using Python.
Exercise No: 13

IMPLEMENT K-NEAREST NEIGHBORS (K-NN) FOR CLASSIFICATION

Aim:
To implement k-Nearest Neighbors (K-NN) for classification

Algorithm:

1. Import Libraries: You'll need libraries like NumPy and optionally Matplotlib for
visualization.
2. Prepare the Dataset: Use a dataset for testing. The Iris dataset is a common choice.
3. Distance Calculation: Implement a function to calculate the distance between points.
4. Finding Neighbors: Write a function to find the k-nearest neighbors.
5. Prediction: Implement a function to predict the class of a new point based on its
neighbors.

Program:
import numpy as np
from collections import Counter
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

class KNN:
    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        predictions = [self._predict(x) for x in X]
        return np.array(predictions)

    def _predict(self, x):
        # Compute distances between x and all examples in the training set
        distances = np.linalg.norm(self.X_train - x, axis=1)

        # Sort by distance and return indices of the first k neighbors
        k_indices = np.argsort(distances)[:self.k]

        # Extract the labels of the k nearest neighbors
        k_nearest_labels = [self.y_train[i] for i in k_indices]

        # Return the most common class label among the k neighbors
        most_common = Counter(k_nearest_labels).most_common(1)
        return most_common[0][0]

# Load Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and fit the K-NN model
k = 3
model = KNN(k=k)
model.fit(X_train, y_train)

# Make predictions
predictions = model.predict(X_test)

# Calculate accuracy
accuracy = np.mean(predictions == y_test)
print(f'Predictions: {predictions}')
print(f'True Labels: {y_test}')
print(f'Accuracy: {accuracy:.2f}')

Output:
Predictions: [0 2 0 1 0]
True Labels: [0 2 0 1 0]
Accuracy: 1.00

Result:
Thus k-Nearest Neighbors (K-NN) is implemented successfully using Python.
Exercise No: 14

BUILD A CONVOLUTIONAL NEURAL NETWORK (CNN) FOR IMAGE CLASSIFICATION

Aim:
To build a Convolutional Neural Network (CNN) for image classification

Algorithm:
1. Install Required Libraries
2. Import Necessary Libraries
3. Load and Preprocess the Dataset
4. Build the CNN Model
5. Compile the Model
6. Train the Model
7. Evaluate the Model
8. Visualize Training History
9. Make Predictions
Program:
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import cifar10
import matplotlib.pyplot as plt

# Load CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Normalize pixel values to be between 0 and 1
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

# Convert labels to categorical one-hot encoding
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)

model = models.Sequential()

# First convolutional block
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))

# Second convolutional block
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

# Third convolutional block
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

# Flatten the output
model.add(layers.Flatten())

# Fully connected layers
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))  # 10 classes for CIFAR-10

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(x_train, y_train, epochs=10, batch_size=64, validation_data=(x_test, y_test))

test_loss, test_acc = model.evaluate(x_test, y_test)
print(f'Test accuracy: {test_acc}')

# Plot training & validation accuracy values
plt.plot(history.history['accuracy'], label='Train Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend()
plt.show()

# Plot training & validation loss values
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend()
plt.show()

# Make predictions on the test data
predictions = model.predict(x_test)

# Show a sample prediction
import numpy as np

# Display the first test image and its predicted label
plt.imshow(x_test[0])
plt.title(f'Predicted label: {np.argmax(predictions[0])}')
plt.axis('off')
plt.show()
Output:
Test accuracy: 0.7500

Result:
Thus the Convolutional Neural Network (CNN) for image classification is implemented successfully using Python.
