CS3491 - AIML LAB MANUAL
COLLEGE OF ENGINEERING
College Road, Ayikudi, Tenkasi – 627852
Affiliated to Anna University and Approved by AICTE
REGULATION-2021
YEAR/SEM: II/IV
LABORATORY MANUAL
College Vision and Mission
VISION:
To inculcate technical knowledge and soft skills among rural students through a student-centric
learning process and make them competent engineers with professional ethics to face
global challenges, thus bridging the 'rural-urban divide'.
MISSION:
M1: To develop our department as a center of excellence, imparting quality education and
generating competent and skilled manpower.
M2: We prepare our students with a high degree of credibility, integrity, ethical standards and
social concern.
M3: We train our students to devise and implement novel systems, based on education and
research.
COURSE OUTCOMES:
At the end of this course, the students will be able to
2 Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
5 Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex engineering
activities with an understanding of the limitations.
6 The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent
responsibilities relevant to the professional engineering practice.
8 Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
12 Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological
change.
INSTRUCTION:
3. Do not open the system unit casing or monitor casing particularly when the power is turned
on (30,000 volts).
4. Do not insert metal objects such as clips, pins, and needles into the computer casings.
5. Do not remove anything from the computer laboratory without permission.
6. Do not touch, connect, or disconnect any plug or cable without permission.
7. Do not touch any circuit boards or power sockets when a device is connected to them or
switched on.
8. Do not open an external device without scanning it for computer viruses.
13. Do not unplug anything until the computer has properly shut down.
14. Do not copy the work of other students.
15. Do not attempt to repair, open, tamper, or interfere with anything inside the lab.
16. Do not plug in any other devices.
LIST OF EXPERIMENTS:
4 Bayesian Networks
5 Regression models
7 SVM models
8 Ensembling techniques
9 Clustering algorithms
11 Simple NN models
Exercise No: 1
1. Implementation of BFS
Algorithm:
1. Pick any node, visit the adjacent unvisited vertex, mark it as visited, display it, and
insert it in a queue
2. If there are no adjacent vertices left, remove the first vertex from the queue
3. Repeat steps 1 and 2 until the queue is empty or the desired node is found
Program
from collections import deque
# Define the bfs function
def bfs(graph, start):
    visited = set()
    queue = deque([start])
    visited.add(start)
    while queue:
        vertex = queue.popleft()
        print(str(vertex) + " ", end="")
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}
bfs(graph, 'A')
Output
ABCDEF
2. Implementation of DFS
Algorithm :
1. We will start by putting any one of the graph's vertices on top of the stack
2. After that, take the top item of the stack and add it to the visited list
3. Next, create a list of the adjacent nodes of that vertex. Add the ones which are not in the
visited list to the top of the stack
4. Lastly, keep repeating steps 2 and 3 until the stack is empty
Program
# Define the graph using an adjacency list
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}
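# Define the dfs function (sketch: a stack-based traversal matching the
# algorithm above; pushing neighbours in reverse keeps the listed order)
def dfs(graph, start):
    visited = set()
    stack = [start]
    while stack:
        vertex = stack.pop()
        if vertex not in visited:
            visited.add(vertex)
            print(str(vertex) + " ", end="")
            for neighbor in reversed(graph[vertex]):
                if neighbor not in visited:
                    stack.append(neighbor)

dfs(graph, 'A')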
Output
ABDEFC
Result
Thus the uninformed search algorithms (BFS and DFS) are implemented successfully using
Python.
Exercise No: 2
1. Implementation of A* Search
Algorithm :
1. Add the start node to the open list
2. Among the nodes in the open list, select the node with the least cost f = g + h and expand its neighbours
3. Move the expanded node to the closed list and repeat until the goal node is reached or the open list is empty
Program:
from collections import deque
class Graph:
    def __init__(self, adjacency_list):
        self.adjacency_list = adjacency_list
    # heuristic function: a constant estimate of 1 for every node
    # (the 'A' and 'B' entries are assumed to match the surviving 'C' and 'D' entries)
    def h(self, n):
        H = {'A': 1, 'B': 1, 'C': 1, 'D': 1}
        return H[n]
        g[start_node] = 0

        if n == None:
            print('Path does not exist!')
            return None

        reconst_path.reverse()
        print('Path found: {}'.format(reconst_path))
        return reconst_path

        if m in closed_list:
            closed_list.remove(m)
            open_list.add(m)
adjacency_list = {
'A': [('B', 1), ('C', 3), ('D', 7)],
'B': [('D', 5)],
'C': [('D', 12)]
}
graph1 = Graph(adjacency_list)
graph1.a_star_algorithm('A', 'D')
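The fragments above follow the usual open-list/closed-list formulation of A*. For reference, a compact, self-contained sketch of the same search over the same weighted graph, assuming the constant heuristic h(n) = 1 implied by the H table, is given below.

import heapq

def a_star(adjacency_list, start, goal, h=lambda n: 1):
    # priority queue of (f, g, node, path), where f = g + h
    open_list = [(h(start), 0, start, [start])]
    closed_list = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            print('Path found: {}'.format(path))
            return path
        if node in closed_list:
            continue
        closed_list.add(node)
        for neighbour, weight in adjacency_list.get(node, []):
            if neighbour not in closed_list:
                heapq.heappush(open_list,
                               (g + weight + h(neighbour), g + weight, neighbour, path + [neighbour]))
    print('Path does not exist!')
    return None

adjacency_list = {
    'A': [('B', 1), ('C', 3), ('D', 7)],
    'B': [('D', 5)],
    'C': [('D', 12)]
}
a_star(adjacency_list, 'A', 'D')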
Output

2. Implementation of Memory-Bounded A*
Algorithm:
6. Repeat the following steps until the open list is empty or the goal node is found:
a. If the current memory usage exceeds the maximum memory threshold, prune the
open list by removing the node with the highest f-score.
b. Pop the node with the lowest f-score from the open list and add it to the closed list.
c. If the popped node is the goal node, return the path.
d. Generate the successors of the popped node and calculate their f-scores.
e. Add the successors to the open list if they are not already in the closed list and if their
f-scores are lower than the maximum f-score seen so far.
f. Update the maximum f-score seen so far.
g. Update the current memory usage.
7. If the goal node was not found, return failure.
Program:
import heapq
class MemoryBoundedAStar:
    def search(self):
        # Initialize the frontier with the starting state
        frontier = [(self.heuristic_fn(self.start_state), 0, self.start_state)]
        # Initialize the explored set as an empty set
        explored = set()
        # Initialize the maximum memory used as zero
        max_memory_used = 0
        # Return failure if the frontier is empty and the goal state has not been found
        return "failure"

    def get_successors(self, state):
        # Generate grid successors (method name and unpacking assumed)
        x, y = state
        successors = []
        if x > 0:
            successors.append(((x-1, y), "left", 1))
        if y > 0:
            successors.append(((x, y-1), "down", 1))
        return successors
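A compact, runnable sketch of the pruning idea described in the algorithm above is given below; the grid state space, the Manhattan-distance heuristic, and the memory limit are assumptions made for illustration rather than the original listing.

import heapq

def memory_bounded_a_star(start, goal, max_nodes=50):
    def heuristic(state):
        # Manhattan distance to the goal (assumed heuristic)
        return abs(goal[0] - state[0]) + abs(goal[1] - state[1])

    def successors(state):
        # Four-connected grid moves (assumed state space)
        x, y = state
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    open_list = [(heuristic(start), 0, start, [start])]   # (f, g, state, path)
    closed_list = set()
    while open_list:
        # a. Prune the node with the highest f-score when memory is exceeded
        if len(open_list) > max_nodes:
            open_list.remove(max(open_list))
            heapq.heapify(open_list)
        # b. Pop the node with the lowest f-score
        f, g, state, path = heapq.heappop(open_list)
        # c. If the popped node is the goal node, return the path
        if state == goal:
            return path
        if state in closed_list:
            continue
        closed_list.add(state)
        # d/e. Generate successors and add the unexplored ones to the open list
        for nxt in successors(state):
            if nxt not in closed_list:
                heapq.heappush(open_list, (g + 1 + heuristic(nxt), g + 1, nxt, path + [nxt]))
    # 7. The goal node was not found
    return "failure"

print(memory_bounded_a_star((0, 0), (2, 1)))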
Output
(8, 10)
Result
Thus the informed search algorithms (A*, memory-bounded A*) are implemented successfully
using Python.
Exercise No: 3
NAÏVE BAYES MODELS
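A minimal Gaussian naïve Bayes classifier is sketched below as one way such an accuracy figure can be produced; the Iris dataset, the 70/30 split, and the random seed are assumptions made for illustration.

# Minimal Gaussian naïve Bayes sketch (dataset and split assumed)
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

model = GaussianNB()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print("Accuracy: {:.2f}%".format(accuracy_score(y_test, y_pred) * 100))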
Output
Accuracy: 97.78%
Output
['positive']
Output
Accuracy: 0.8861111111111111
Result
Thus the naïve Bayes models are implemented successfully using Python.
Exercise No: 4
BAYESIAN NETWORKS
Aim
To construct a Bayesian network model
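A minimal sketch, assuming the pgmpy library and a small burglary-alarm network with illustrative probability values, is given below; model.check_model() prints True when the structure and CPDs are consistent, matching the output shown.

# Bayesian network sketch using pgmpy (library choice and CPT values assumed)
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD

model = BayesianNetwork([('Burglary', 'Alarm'), ('Earthquake', 'Alarm')])

cpd_burglary = TabularCPD(variable='Burglary', variable_card=2, values=[[0.999], [0.001]])
cpd_earthquake = TabularCPD(variable='Earthquake', variable_card=2, values=[[0.998], [0.002]])
cpd_alarm = TabularCPD(variable='Alarm', variable_card=2,
                       values=[[0.999, 0.71, 0.06, 0.05],
                               [0.001, 0.29, 0.94, 0.95]],
                       evidence=['Burglary', 'Earthquake'], evidence_card=[2, 2])

model.add_cpds(cpd_burglary, cpd_earthquake, cpd_alarm)
# check_model() returns True when the structure and CPDs are consistent
print(model.check_model())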
Output
True
Result
Thus the Bayesian network model is implemented successfully using Python.
Exercise No: 5
REGRESSION MODELS
Aim
To build regression models
1. LINEAR REGRESSION
Algorithm
1. Import libraries
2. Generate example data
3. Split data into training and testing sets
4. Create and fit the linear regression model
5. Make predictions on the test data
6. Calculate mean squared error (MSE)
7. Print coefficients and MSE
8. Plot the data and the linear regression line
Program
# Import libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
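The remainder of the listing can be completed along the following lines; the synthetic data (roughly y = 3x plus noise, consistent with the coefficient of about 2.97 in the output) and the split parameters are assumptions.

# Generate example data (assumed: y is roughly 3x plus Gaussian noise)
np.random.seed(0)
X = 10 * np.random.rand(100, 1)
y = 3 * X + 2 * np.random.randn(100, 1)

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and fit the linear regression model
model = LinearRegression()
model.fit(X_train, y_train)

# Make predictions on the test data
y_pred = model.predict(X_test)

# Calculate mean squared error (MSE)
mse = mean_squared_error(y_test, y_pred)

# Print coefficients and MSE
print("Coefficients:", model.coef_)
print("Intercept:", model.intercept_)
print("Mean Squared Error (MSE):", mse)

# Plot the data and the linear regression line
plt.scatter(X, y, color='blue')
plt.plot(X_test, y_pred, color='red', linewidth=2)
plt.show()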
Output
Coefficients: [[2.9745886]]
Intercept: [0.64471706]
Mean Squared Error (MSE): 4.17373352627807
2. LOGISTIC REGRESSION
Algorithm
1. Generate some example data
2. Split the data into training and testing sets
3. Train the model
4. Make predictions on the test data
5. Calculate accuracy of the model
6. Visualize the decision boundary
Program
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
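# Assumed continuation: the data generation, split, and training lines
# (dataset shape and parameters are assumptions)
from sklearn.datasets import make_classification

# Generate some example data with two features and two classes
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=42)

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the model
model = LogisticRegression()
model.fit(X_train, y_train)

# Make predictions on the test data and calculate accuracy of the model
y_pred = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))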
# Create a grid of points to plot the decision boundary
# Plot the decision boundary
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
Output
Accuracy: 1.0
3. BAYESIAN LINEAR REGRESSION
Algorithm
1. Generate synthetic data
2. Fit Bayesian linear regression
3. Predict using the trained model
4. Plot the data, true line, and predicted line with uncertainty
Program
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import BayesianRidge
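# Assumed continuation: data generation and model fitting
# (the true line y = 2x + 1 matches the 'True Line' plotted below)
np.random.seed(42)
X = 10 * np.random.rand(50, 1)
y = 2 * X.squeeze() + 1 + np.random.randn(50)
X_test = np.linspace(0, 10, 100).reshape(-1, 1)

# Fit Bayesian linear regression
model = BayesianRidge()
model.fit(X, y)

# Predict using the trained model, returning the predictive standard deviation
y_pred, y_std = model.predict(X_test, return_std=True)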
# Plot the data, true line, and predicted line with uncertainty
plt.scatter(X, y, color='blue', label='Data')
plt.plot(X_test.squeeze(), 2 * X_test.squeeze() + 1, color='green', label='True Line')
plt.plot(X_test.squeeze(), y_pred, color='red', label='Predicted Line')
plt.fill_between(X_test.squeeze(), y_pred -
y_std, y_pred + y_std, color='pink', alpha=0.5, label='Uncertainty')
plt.xlabel('X')
plt.ylabel('y')
plt.legend()
plt.show()
Output
Result
Thus the regression models are built successfully using Python.
Exercise No: 6
1. BUILD DECISION TREES
Algorithm
1. Collect a dataset
2. Clean, normalize, and transform the data
3. Split the dataset into training and testing sets
4. Build the decision tree by recursively splitting the dataset into smaller subsets
5. Train the model
6. Make predictions
7. Evaluate the model
Program
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import accuracy_score, mean_squared_error
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
mse = mean_squared_error(y_test, y_pred)
print("Mean Squared Error:", mse)
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt
plt.figure(figsize=(12, 6))
plot_tree(clf, filled=True)
plt.show()
Output
Accuracy: 1.0
Mean Squared Error: 0.0
2. BUILD RANDOM FORESTS
Algorithm
1. Collect a dataset
2. Clean, normalize, and transform the data
3. Split the dataset into training and testing sets
4. Build each decision tree by recursively splitting the data into smaller subsets
5. Assemble the decision trees into a forest by training each tree on a different subset of
the training data
6. Make predictions
7. Evaluate the model
Random Forests in `scikit-learn` (with N = 100)
Program
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn import tree
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
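# Assumed setup: load the iris data and fit a random forest with N = 100 trees
# (data handling and training parameters are assumptions)
data = load_iris()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print("Accuracy:", rf.score(X_test, y_test))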
# Plot the first five estimators from the forest
fn = data.feature_names
cn = data.target_names
fig, axes = plt.subplots(nrows=1, ncols=5, figsize=(10, 2), dpi=900)
for index in range(0, 5):
    tree.plot_tree(rf.estimators_[index],
                   feature_names=fn,
                   class_names=cn,
                   filled=True,
                   ax=axes[index])
fig.savefig('rf_individualtree.png')
# This may not be the best way to view each estimator as it is small
Output
Result
Thus the decision trees and random forests are built successfully using Python.
Exercise No: 7
SVM MODELS
Aim
To build SVM models
Algorithm
1. Gather a dataset that includes both input features and the corresponding output values
that you want to predict
2. Clean, normalize, and transform the data into a format that can be used for building the
SVM model.
3. Divide the dataset into two subsets: one for training the SVM model and the other for
testing the model.
4. Select a kernel function that will be used to map the input features into a higher-
dimensional space where the data can be separated more easily. Common kernel
functions include Linear, Polynomial, Gaussian RBF, and Sigmoid
5. Train the SVM model on the training data
6. Evaluate the performance of the SVM model
7. Report the accuracy of the SVM model
Program
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
X = np.array([[19, 19000], [35, 20000], [26, 43000], [27, 57000], [19, 76000], [27, 58000],
              [27, 84000], [32, 150000], [25, 33000], [34, 65000], [26, 80000], [26, 52000],
              [20, 86000], [32, 18000], [18, 82000], [29, 80000]])  # age, salary
y = np.array([0,0,0,1,1,1,0,1,0,1,0,1,0,1,0,0]) #purchased
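# Assumed continuation: split, feature-scale, and fit a linear-kernel SVM
# (the kernel choice and split parameters are assumptions)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

classifier = SVC(kernel='linear', random_state=0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)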
accuracy_score(y_test,y_pred)
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
Output
[[1 2]
[0 1]]
Result
Thus the SVM models are built successfully using Python.
Exercise No: 8
ENSEMBLING TECHNIQUES
Aim
To implement ensembling techniques
Algorithm
1. Load dataset
2. Assume X is the feature matrix and y is the target variable
3. Split the data into training and testing sets
4. Create voting, stacking, bagging and boosting ensemble classifiers
5. Train the ensemble classifiers
6. Make predictions on test set
7. Calculate accuracy of the ensemble models
Program
# Import libraries
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
# Load dataset
# Assume X is the feature matrix and y is the target variable
X = np.array([[19, 19000], [35, 20000], [26, 43000], [27, 57000], [19, 76000], [27, 58000],
              [27, 84000], [32, 150000], [25, 33000], [34, 65000], [26, 80000], [26, 52000],
              [20, 86000], [32, 18000], [18, 82000], [29, 80000]])  # age, salary
y = np.array([0,0,0,1,1,1,0,1,0,1,0,1,0,1,0,0]) #purchased
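# Assumed: split the data and define the base estimators used below
# (split parameters are assumptions)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
base_classifier = DecisionTreeClassifier(random_state=42)
base_model1 = DecisionTreeClassifier(random_state=42)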
base_model2 = SVC(random_state=42)
base_model3 = LogisticRegression(random_state=42)
# Bagging Classifier
bagging_classifier = BaggingClassifier(base_classifier, n_estimators=10, random_state=42)
bagging_classifier.fit(X_train, y_train)
bagging_predictions = bagging_classifier.predict(X_test)
bagging_accuracy = accuracy_score(y_test, bagging_predictions)
print("Bagging Classifier Accuracy:", bagging_accuracy)
# AdaBoost Classifier
adaboost_classifier = AdaBoostClassifier(base_classifier, n_estimators=10, random_state=42)
adaboost_classifier.fit(X_train, y_train)
adaboost_predictions = adaboost_classifier.predict(X_test)
adaboost_accuracy = accuracy_score(y_test, adaboost_predictions)
print("AdaBoost Classifier Accuracy:", adaboost_accuracy)
# Stacking Classifier
# Create the base classifiers
base_classifier_1 = DecisionTreeClassifier()
base_classifier_2 = LogisticRegression()
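# Assumed completion: build, train, and score the stacking and voting ensembles
# named in the algorithm (estimator combinations and cv value are assumptions)
stacking_classifier = StackingClassifier(
    estimators=[('dt', base_classifier_1), ('lr', base_classifier_2)],
    final_estimator=LogisticRegression(), cv=3)
stacking_classifier.fit(X_train, y_train)
stacking_predictions = stacking_classifier.predict(X_test)
print("Stacking Classifier Accuracy:", accuracy_score(y_test, stacking_predictions))

# Voting Classifier
voting_classifier = VotingClassifier(
    estimators=[('dt', base_model1), ('svc', base_model2), ('lr', base_model3)],
    voting='hard')
voting_classifier.fit(X_train, y_train)
voting_predictions = voting_classifier.predict(X_test)
print("Voting Classifier Accuracy:", accuracy_score(y_test, voting_predictions))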
Output
Result
Thus the ensembling techniques are implemented successfully using Python.
Exercise No: 9
CLUSTERING ALGORITHMS
Aim
To implement clustering algorithms
1. GAUSSIAN MIXTURE MODEL
Algorithm
1. Load the iris dataset
2. Select first two columns
3. Turn it into a dataframe
4. Plot the data
5. Fit the gmm model for the dataset which expresses the dataset as a mixture of 3
gaussian distribution
6. Assign a label to each sample
7. Plot three clusters in same plot
8. Print the converged log-likelihood value
9. Print the number of iterations needed for the log-likelihood value to converge
Program
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import DataFrame
from sklearn import datasets
from sklearn.mixture import GaussianMixture
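A possible completion of the listing follows; the choice of the first two iris columns comes from the algorithm above, while the column names and plot colours are assumptions.

# Load the iris dataset and keep the first two columns
iris = datasets.load_iris()
X = iris.data[:, :2]
d = DataFrame(X, columns=['sepal length', 'sepal width'])

# Plot the data
plt.scatter(d['sepal length'], d['sepal width'])
plt.show()

# Fit a GMM that expresses the dataset as a mixture of 3 Gaussian distributions
gmm = GaussianMixture(n_components=3)
gmm.fit(d)

# Assign a label to each sample and plot the three clusters in the same plot
d['labels'] = gmm.predict(d)
for k, colour in zip(range(3), ['blue', 'green', 'red']):
    cluster = d[d['labels'] == k]
    plt.scatter(cluster['sepal length'], cluster['sepal width'], c=colour)
plt.show()

# Print the converged log-likelihood value and the number of EM iterations needed
print(gmm.lower_bound_)
print(gmm.n_iter_)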
Output
-1.4987505566235166
8
2. K-MEANS CLUSTERING ALGORITHM
Algorithm
1. Generate some sample data
2. Initialize the KMeans object
3. Fit the K-means model to the data
4. Access the cluster centers
5. Print the cluster centers
6. Plot the original data points and the predicted labels
Program
import numpy as np
from sklearn.cluster import KMeans
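A possible completion is sketched below; the two synthetic groups of points near (0, 0) and (5, 5) are assumptions chosen to be consistent with the cluster centers in the output.

import matplotlib.pyplot as plt

# Generate some sample data: two groups of points near (0, 0) and (5, 5)
np.random.seed(0)
X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + [5, 5]])

# Initialize the KMeans object and fit the K-means model to the data
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(X)

# Access and print the cluster centers
print("Cluster centers:", kmeans.cluster_centers_)

# Plot the original data points and the predicted labels
plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_, cmap='viridis')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], c='red', marker='x')
plt.show()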
Output
Cluster centers: [[ 4.92846083e+00 4.94352468e+00]
[-9.57681805e-04 1.42778668e-01]]
3. KNN ALGORITHM
Algorithm
1. Generate a dataset
2. Instantiate a KNN classifier
3. Train the KNN model
4. Predict the labels for the test data
5. Calculate the accuracy of the model
6. Plot the training and test data
Program
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
# Generate a toy dataset
X, y = make_blobs(n_samples=200, centers=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Instantiate a KNN classifier
k = 5 # Number of neighbors to consider
knn = KNeighborsClassifier(n_neighbors=k)
# Train the KNN model
knn.fit(X_train, y_train)
# Predict the labels for the test data
y_pred = knn.predict(X_test)
# Calculate the accuracy of the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
# Plot the training and test data
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap='viridis', label='Training Data')
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap='viridis', marker='x', label='Test Data')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('KNN Classification')
plt.legend()
plt.show()
Output
Accuracy: 1.0
4. EXPECTATION MAXIMIZATION ALGORITHM
Algorithm
1. Start with initial estimates of the model parameters
2. calculate the expected value of the latent variables (hidden variables)
3. Iterate between the E-step and M-step until the estimated model parameters converge to
a fixed point
4. Stop the algorithm when the convergence criterion is met or when a maximum number of
iterations has been reached
Program
import numpy as np
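# Assumed setup: a 1-D dataset drawn from two Gaussians, fitted with EM
# (the data and the number of iterations are assumptions)
np.random.seed(0)
data = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(5, 1.5, 200)])

# Start with initial estimates of the model parameters
weights = np.array([0.5, 0.5])
means = np.array([0.0, 1.0])
variances = np.array([1.0, 1.0])

def gaussian(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for _ in range(100):
    # E-step: responsibility of each component for each data point
    resp = np.array([w * gaussian(data, m, v)
                     for w, m, v in zip(weights, means, variances)])
    resp /= resp.sum(axis=0)
    # M-step: re-estimate the weights, means, and variances
    weights = resp.mean(axis=1)
    means = (resp * data).sum(axis=1) / resp.sum(axis=1)
    variances = (resp * (data - means[:, None]) ** 2).sum(axis=1) / resp.sum(axis=1)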
print("Means:", means)
print("Variances:", variances)
Output
Result
Thus the clustering algorithms are implemented successfully using Python.
Exercise No: 10
EM FOR BAYESIAN NETWORKS
Aim
To implement EM for Bayesian networks
Algorithm
1. Define the prior probabilities
2. Define the observed data
3. Initialize the model parameters
4. Run the EM Algorithm for 10 iterations
5. E-step: Compute the expected value
6. M-step: Update the model parameters based on the expected value
7. Print the final model parameters
Program
import numpy as np
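# Assumed setup for the burglary/alarm example (prior values and observed data
# are illustrative assumptions)
# Define the prior probabilities
burglary = 0.1
not_burglary = 1 - burglary
alarm_given_burglary = 0.9
alarm_given_not_burglary = 0.1

# Define the observed data: 1 means the alarm rang for that observation
data = np.array([1, 0, 0, 1, 0, 0, 0, 0, 1, 0])

# E-step: expected value of Burglary for each observation given the alarm evidence
p_alarm = burglary * alarm_given_burglary + not_burglary * alarm_given_not_burglary
p_b_given_alarm = burglary * alarm_given_burglary / p_alarm
p_b_given_not_alarm = burglary * (1 - alarm_given_burglary) / (1 - p_alarm)
p_burglary_given_alarm = np.where(data == 1, p_b_given_alarm, p_b_given_not_alarm)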
# M-step: Update the model parameters based on the expected value of burglary
burglary = np.mean(p_burglary_given_alarm)
not_burglary = 1 - burglary
alarm_given_burglary = np.sum(p_burglary_given_alarm) / np.sum(burglary)
alarm_given_not_burglary = np.sum(1 - p_burglary_given_alarm) / np.sum(not_burglary)
Output
P(Burglary) = 0.0902
P(Not Burglary) = 0.9098
P(Alarm | Burglary) = 1.0000
P(Alarm | Not Burglary) = 1.0000
Result
Thus EM for Bayesian networks is implemented successfully using Python.
Exercise No: 11
SIMPLE NN MODELS
Aim
To build simple NN models
Algorithm
1. Define the problem
2. Gather and prepare data
3. Decide on the architecture of your neural network. This includes the number of layers,
the number of neurons in each layer, and the activation functions (ReLU) used.
4. Initialize the weights of the neural network randomly.
5. Forward propagation: Pass the input data through the neural network, and calculate the
output of the network using the weights.
6. Calculate the error: Calculate the difference between the predicted output of the neural
network and the actual output.
7. Backpropagation: Propagate the error back through the network, adjusting the weights to
minimize the error.
8. Repeat steps 5-7 with the same data until the error is minimized.
9. Validate and test the network to make accurate predictions.
Program
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
model = Sequential()
model.add(Dense(4, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y,epochs=1000,batch_size=4)
print(model.predict(np.array([[0, 0], [0, 1], [1, 0], [1, 1]])))
Output
Epoch 231/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.6894 - accuracy: 0.5000
Epoch 232/1000
1/1 [==============================] - 0s 11ms/step - loss: 0.6892 - accuracy: 0.5000
Epoch 233/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.6891 - accuracy: 0.5000
Epoch 594/1000
1/1 [==============================] - 0s 8ms/step - loss: 0.5975 - accuracy: 0.7500
Epoch 595/1000
1/1 [==============================] - 0s 8ms/step - loss: 0.5971 - accuracy: 0.7500
.
.
.
.
Epoch 996/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.4194 - accuracy: 1.0000
Epoch 997/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.4190 - accuracy: 1.0000
Epoch 998/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.4186 - accuracy: 1.0000
Epoch 999/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.4181 - accuracy: 1.0000
Epoch 1000/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.4177 - accuracy: 1.0000
[[0.4279034 ]
 [0.6979224 ]
 [0.6604798 ]
 [0.28546384]]
Result
Thus the simple NN models are built successfully using Python.
Exercise No: 12
DEEP LEARNING NN MODELS
Aim
To build deep learning NN models
Program
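# Assumed start of the listing: a small multi-layer network for the XOR data,
# as in the previous exercise (the layer sizes are assumptions)
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

model = Sequential()
model.add(Dense(8, input_dim=2, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))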
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(X, y,
epochs=1000,
batch_size=4,
verbose=1)
print(model.predict(np.array([[0, 0], [0, 1], [1, 0], [1, 1]])))
Output
Epoch 86/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.6745 - accuracy: 0.5000
Epoch 87/1000
1/1 [==============================] - 0s 13ms/step - loss: 0.6739 - accuracy: 0.5000
Epoch 88/1000
1/1 [==============================] - 0s 12ms/step - loss: 0.6732 - accuracy: 0.5000
Epoch 89/1000
1/1 [==============================] - 0s 10ms/step - loss: 0.6726 - accuracy: 0.5000
Epoch 90/1000
1/1 [==============================] - 0s 14ms/step - loss: 0.6720 - accuracy: 0.5000.
.
.
Epoch 234/1000
1/1 [==============================] - 0s 7ms/step - loss: 0.5759 - accuracy: 0.7500
Epoch 235/1000
1/1 [==============================] - 0s 8ms/step - loss: 0.5751 - accuracy: 0.7500
Epoch 236/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.5742 - accuracy: 0.7500
Epoch 237/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.5734 - accuracy: 0.7500
.
.
.
Epoch 996/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.0422 - accuracy: 1.0000
Epoch 997/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.0421 - accuracy: 1.0000
Epoch 998/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.0419 - accuracy: 1.0000
Epoch 999/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.0418 - accuracy: 1.0000
Epoch 1000/1000
1/1 [==============================] - 0s 9ms/step - loss: 0.0417 - accuracy: 1.0000
1/1 [==============================] - 0s 89ms/step
[[0.09045944]
[0.9738202 ]
[0.97509986]
[0.01960859]]
2. CONVOLUTIONAL NEURAL NETWORK (CNN)
Algorithm
1. Set random seed for reproducibility
2. Define the input shape of our images
3. Define the number of classes
4. Create a Sequential model
5. Add layers to the model as follows
6. Add the first convolutional layer with 32 filters, a 3x3 kernel, and ReLU
activation function
7. Add the first max pooling layer with a 2x2 pool size
8. Add the second convolutional layer with 64 filters and a 3x3 kernel
9. Add the second max pooling layer with a 2x2 pool size
10. Flatten the output from the convolutional layers
11. Add a fully connected layer with 128 units and a ReLU activation function
12. Compile the model with categorical cross-entropy loss function and Adam optimizer
13. Print the model summary
Program
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
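# Assumed setup lines: seed, input shape, number of classes, and the model object
# (a 28x28 grayscale input and 10 classes are consistent with the summary in the output)
np.random.seed(42)

# Define the input shape of our images and the number of classes
input_shape = (28, 28, 1)
num_classes = 10

# Create a Sequential model
model = Sequential()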
# Add the first convolutional layer with 32 filters, a 3x3 kernel, and ReLU activation function
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
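# Add the first max pooling layer with a 2x2 pool size
model.add(MaxPooling2D(pool_size=(2, 2)))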
# Add the second convolutional layer with 64 filters and a 3x3 kernel
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
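# Add the second max pooling layer with a 2x2 pool size
model.add(MaxPooling2D(pool_size=(2, 2)))

# Flatten the output from the convolutional layers
model.add(Flatten())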
# Add a fully connected layer with 128 units and a ReLU activation function
model.add(Dense(128, activation='relu'))
# Add the output layer with num_classes units and a softmax activation function
model.add(Dense(num_classes, activation='softmax'))
# Compile the model with categorical cross-entropy loss function and Adam optimizer
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
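# Print the model summary
model.summary()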
Output
Model: "sequential_8"
_
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 32) 320
=================================================================
Total params: 225,034
Trainable params: 225,034
Non-trainable params: 0
3. RECURRENT NEURAL NETWORK (RNN)
Algorithm
1. Set random seed for reproducibility
2. Define input sequence
3. Define output sequence
4. Define model architecture
5. Compile model
6. Train model
7. Test model
Program
# Import necessary libraries
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN
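# Assumed definitions (the sequences are illustrative); the SimpleRNN(4) + Dense(1)
# architecture is consistent with the 24 + 5 = 29 parameters in the summary
# Set random seed for reproducibility
np.random.seed(42)

# Define input sequence: five sequences of three binary time steps
input_sequence = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 0], [0, 1, 1]])

# Define output sequence: one binary label per sequence
output_sequence = np.array([0, 1, 1, 0, 1])

# Define model architecture
model = Sequential()
model.add(SimpleRNN(4, input_shape=(3, 1)))
model.add(Dense(1, activation='sigmoid'))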
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Train model
model.fit(input_sequence.reshape(5, 3, 1), output_sequence, epochs=50, verbose=2)
# Test model
test_sequence = np.array([[0, 1, 1], [1, 1, 1], [0, 0, 1]])
predictions = model.predict(test_sequence.reshape(3, 3, 1))
model.summary()
Output
Epoch 1/50
1/1 - 2s - loss: 0.6728 - accuracy: 0.6000 - 2s/epoch - 2s/step
Epoch 2/50
1/1 - 0s - loss: 0.6723 - accuracy: 0.6000 - 9ms/epoch - 9ms/step
Epoch 3/50
1/1 - 0s - loss: 0.6718 - accuracy: 0.6000 - 8ms/epoch - 8ms/step
Epoch 4/50
1/1 - 0s - loss: 0.6713 - accuracy: 0.6000 - 9ms/epoch - 9ms/step
Epoch 5/50
1/1 - 0s - loss: 0.6708 - accuracy: 0.6000 - 8ms/epoch - 8ms/step
Epoch 6/50
1/1 - 0s - loss: 0.6703 - accuracy: 0.6000 - 9ms/epoch - 9ms/step
.
.
1/1 - 0s - loss: 0.6539 - accuracy: 0.6000 - 13ms/epoch - 13ms/step
Epoch 48/50
1/1 - 0s - loss: 0.6536 - accuracy: 0.6000 - 9ms/epoch - 9ms/step
Epoch 49/50
1/1 - 0s - loss: 0.6533 - accuracy: 0.6000 - 8ms/epoch - 8ms/step
Epoch 50/50
1/1 - 0s - loss: 0.6530 - accuracy: 0.6000 - 8ms/epoch - 8ms/step
1/1 [==============================] - 0s 179ms/step
Model: "sequential_9"
_
Layer (type) Output Shape Param #
=================================================================
simple_rnn (SimpleRNN) (None, 4) 24
=================================================================
Total params: 29
Trainable params: 29
Non-trainable params: 0
Result
Thus the deep learning NN models are built successfully using Python.
Exercise:13
Implement k-Nearest Neighbors (k-NN) for classification
Aim:
To implement k-Nearest Neighbors (k-NN) for classification
Algorithm:
1. Import Libraries: You'll need libraries like NumPy and optionally Matplotlib for
visualization.
2. Prepare the Dataset: Use a dataset for testing. The Iris dataset is a common choice.
3. Distance Calculation: Implement a function to calculate the distance between points.
4. Finding Neighbors: Write a function to find the k-nearest neighbors.
5. Prediction: Implement a function to predict the class of a new point based on its
neighbors.
Program:
import numpy as np
from collections import Counter
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
class KNN:
    def __init__(self, k=3):
        self.k = k
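    # Assumed completion of the class: fit, neighbour search, and majority-vote prediction
    def fit(self, X, y):
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        return np.array([self._predict(x) for x in X])

    def _predict(self, x):
        # Distance calculation: Euclidean distance to every training point
        distances = np.sqrt(np.sum((self.X_train - x) ** 2, axis=1))
        # Finding neighbors: indices of the k closest training points
        k_indices = np.argsort(distances)[:self.k]
        k_labels = self.y_train[k_indices]
        # Prediction: majority vote among the neighbors
        return Counter(k_labels).most_common(1)[0][0]

# Load the Iris dataset and hold out a small test set (split parameters assumed)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=5, random_state=42)

# Train the k-NN model
model = KNN(k=3)
model.fit(X_train, y_train)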
# Make predictions
predictions = model.predict(X_test)
# Calculate accuracy
accuracy = np.mean(predictions == y_test)
print(f'Predictions: {predictions}')
print(f'True Labels: {y_test}')
print(f'Accuracy: {accuracy:.2f}')
Output:
Predictions: [0 2 0 1 0]
True Labels: [0 2 0 1 0]
Accuracy: 1.00
Result:
Thus k-Nearest Neighbors (k-NN) classification is implemented successfully using Python.
Exercise: 14
Build a Convolutional Neural Network (CNN) for image classification
Aim:
To build a Convolutional Neural Network (CNN) for image classification
Algorithm:
1 Install Required Libraries
2 Import Necessary Libraries
3 Load and Preprocess the Dataset
4 Build the CNN Model
5 Compile the Model
6 Train the Model
7 Evaluate the Model
8 Visualize Training History
9 Make Predictions
Program:
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import cifar10
import matplotlib.pyplot as plt
# Load CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
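A possible completion of the listing, following the steps in the algorithm, is sketched below; the exact layer sizes and the number of training epochs are assumptions.

# Normalize pixel values to the range [0, 1]
x_train, x_test = x_train / 255.0, x_test / 255.0

# Build the CNN model (layer sizes assumed)
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10)
])

# Compile the model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Train the model
history = model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))

# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('Test accuracy:', test_acc)

# Visualize training history
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()

# Make predictions
predictions = model.predict(x_test[:5])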
Result:
Thus the Convolutional Neural Network (CNN) for image classification is implemented
successfully using Python.