CS3491 - Artificial Intelligence and Machine Learning
Ex.No:1(a)
Implementation of Uninformed Search Algorithms (BFS & DFS)
Date:
Aim:
To implement the simple uninformed search algorithm breadth-first search using Python.
Procedure:
1. Start by putting any one of the graph’s vertices at the back of the queue.
2. Now take the front item of the queue and add it to the visited list.
3. Create a list of that vertex's adjacent nodes. Add those which are not within the visited
list to the rear of the queue.
4. Repeat steps 2 and 3 until the queue is empty.
Program:
from collections import deque

class Graph:
    def __init__(self, directed=True):
        self.edges = {}
        self.directed = directed
    def add_edge(self, node1, node2):
        neighbors = self.edges.setdefault(node1, set())
        neighbors.add(node2)
        self.edges[node1] = neighbors
        if not self.directed: self.edges.setdefault(node2, set()).add(node1)
    def neighbors(self, node):
        return self.edges.get(node, [])
    def breadth_first_search(self, start, goal):
        found, fringe, visited, came_from = False, deque([start]), set([start]), {start: None}
        print('{:11s} | {}'.format('-', start))
        while not found and len(fringe):
            current = fringe.pop()
            print('{:11s}'.format(current), end=' | ')
            if current == goal: found = True; break
            for node in self.neighbors(current):
                if node not in visited: visited.add(node); fringe.appendleft(node); came_from[node] = current
            print(', '.join(fringe))
        if found: print(); Graph.print_path(came_from, goal); print()
        else: print('No path from {} to {}'.format(start, goal))
    @staticmethod
    def print_path(came_from, goal):
        parent = came_from[goal]
        if parent: Graph.print_path(came_from, parent); print(' =>', end=' ')
        else: print('Path:', end=' ')
        print(goal, end='')
    def __str__(self):
        return str(self.edges)
graph = Graph(directed=False)
graph.add_edge('A', 'B')
graph.add_edge('A', 'S')
graph.add_edge('S', 'G')
graph.add_edge('S', 'C')
graph.add_edge('C', 'F')
graph.add_edge('G', 'F')
graph.add_edge('C', 'D')
graph.add_edge('C', 'E')
graph.add_edge('E', 'H')
graph.add_edge('G', 'H')
graph.breadth_first_search('A', 'H')
Output:
- | A
A | S, B
B | S
S | C, G
G | H, F, C
C | E, D, H, F
F | E, D, H
H |
Path: A => S => G => H
Result:
Thus, the program for breadth first search was executed and output is verified.
Ex.No:1(b)
Implementation of Uninformed Search Algorithms (DFS)
Date:
Aim:
To implement the simple uninformed search algorithm depth-first search using Python.
Procedure:
1. Start by putting any one of the graph's vertices on top of the stack.
2. After that, take the top item of the stack and add it to the visited list.
3. Next, create a list of that vertex's adjacent nodes. Add the ones which aren't in the visited list to the top of the stack.
4. Lastly, keep repeating steps 2 and 3 until the stack is empty.
Program:
from collections import deque

class Graph:
    def __init__(self, directed=True):
        self.edges = {}
        self.directed = directed
    def add_edge(self, node1, node2):
        neighbors = self.edges.setdefault(node1, set())
        neighbors.add(node2)
        self.edges[node1] = neighbors
        if not self.directed: self.edges.setdefault(node2, set()).add(node1)
    def neighbors(self, node):
        return self.edges.get(node, [])
    def depth_first_search(self, start, goal):
        found, fringe, visited, came_from = False, deque([start]), set([start]), {start: None}
        print('{:11s} | {}'.format('-', start))
        while not found and len(fringe):
            current = fringe.pop()
            print('{:11s}'.format(current), end=' | ')
            if current == goal: found = True; break
            for node in self.neighbors(current):
                if node not in visited: visited.add(node); fringe.append(node); came_from[node] = current
            print(', '.join(fringe))
        if found: print(); Graph.print_path(came_from, goal); print()
        else: print('No path from {} to {}'.format(start, goal))
    @staticmethod
    def print_path(came_from, goal):
        parent = came_from[goal]
        if parent: Graph.print_path(came_from, parent); print(' =>', end=' ')
        else: print('Path:', end=' ')
        print(goal, end='')
    def __str__(self):
        return str(self.edges)
graph = Graph(directed=False)
graph.add_edge('A', 'B')
graph.add_edge('A', 'S')
graph.add_edge('S', 'G')
graph.add_edge('S', 'C')
graph.add_edge('C', 'F')
graph.add_edge('G', 'F')
graph.add_edge('C', 'D')
graph.add_edge('C', 'E')
graph.add_edge('E', 'H')
graph.add_edge('G', 'H')
graph.depth_first_search('A', 'H')
Output:
- | A
A | B, S
S | B, C, G
G | B, C, F, H
H |
Path: A => S => G => H
Result:
Thus, the program for depth first search was executed and output is verified.
Ex.No:2(a)
Implementation of Informed Search Algorithms (A*, memory-bounded A*)
Date:
Aim:
To implement the simple informed search algorithm A* using Python.
Procedure:
h(n) = heuristic value (estimated cost from node n to the goal)
g(n) = actual cost from the start node to n
f(n) = g(n) + h(n)
At each step, expand the open node with the smallest f(n).
Program:
for v in open_set: # v='B'/'F'
if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
n = v # n='A'
#for each node m, compare its distance from start, i.e. g(m), to the distance from start through node n
else:
if g[m] > g[n] + weight:
#update g(m)
g[m] = g[n] + weight
#change parent of m to n
parents[m] = n
if n == None:
print('Path does not exist!')
return None
path.reverse()
return path
def heuristic(n):
H_dist = {
'A': 10,
'B': 8,
'C': 5,
'D': 7,
'E': 3,
'F': 6,
'G': 5,
'H': 3,
'I': 1,
'J': 0
}
return H_dist[n]
aStarAlgo('A', 'J')
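Much of the listing above was lost in reproduction (the graph itself and parts of aStarAlgo). A minimal self-contained sketch of the same algorithm follows; the adjacency list is an assumption chosen to be consistent with the surviving heuristic table, not the original data:

import heapq

def a_star(graph, h, start, goal):
    # open list ordered by f = g + h; g holds the cheapest known cost from start
    open_heap = [(h[start], start)]
    g, parents = {start: 0}, {start: None}
    while open_heap:
        _, n = heapq.heappop(open_heap)
        if n == goal:
            path = []
            while n is not None:
                path.append(n); n = parents[n]
            return list(reversed(path))
        for m, weight in graph.get(n, []):
            if m not in g or g[n] + weight < g[m]:
                g[m] = g[n] + weight          # update g(m)
                parents[m] = n                # change parent of m to n
                heapq.heappush(open_heap, (g[m] + h[m], m))
    print('Path does not exist!')
    return None

# hypothetical weighted graph, chosen to match the H_dist values above
Graph_nodes = {
    'A': [('B', 6), ('F', 3)], 'B': [('C', 3), ('D', 2)],
    'C': [('D', 1), ('E', 5)], 'D': [('C', 1), ('E', 8)],
    'E': [('I', 5), ('J', 5)], 'F': [('G', 1), ('H', 7)],
    'G': [('I', 3)], 'H': [('I', 2)], 'I': [('E', 5), ('J', 3)],
}
H = {'A': 10, 'B': 8, 'C': 5, 'D': 7, 'E': 3, 'F': 6, 'G': 5, 'H': 3, 'I': 1, 'J': 0}
print(a_star(Graph_nodes, H, 'A', 'J'))   # ['A', 'F', 'G', 'I', 'J'] under these assumptions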
Output:
Result:
Thus, the program for A* search was executed and output is verified.
Ex.No:2(b)
Implementation of Informed Search Algorithms (AO* Search)
Date:
Aim:
To implement the simple informed search algorithm AO* using Python.
Procedure:
Step 1: Proceeds like A*: expands the best leaf until memory is full.
Step 2: Cannot add a new node without dropping an old one (always drops the worst one).
Step 3: Expands the best leaf and deletes the worst leaf.
Step 4: If all leaves have the same f-value, selects the same node for expansion and deletion.
Step 5: SMA* is complete if any solution is reachable.
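The cost revision at the heart of AO* can be sketched compactly. For a node v, each alternative is a set of AND-children; the cost of a set is the sum, over its children, of edge weight plus the child's current heuristic, and v takes the minimum. For 'A' in graph1 below this gives min((1+6)+(1+2), 1+12) = 10, which matches the first trace line of the output. A minimal sketch, assuming the graph and heuristic representation used in the listing:

def revise_cost(graph, H, v):
    # returns the minimum cost over v's AND/OR child sets and the best child set
    best_cost, best_children = float('inf'), []
    for child_set in graph.get(v, []):            # e.g. [[('B', 1), ('C', 1)], [('D', 1)]]
        cost = sum(weight + H[child] for child, weight in child_set)
        if cost < best_cost:
            best_cost, best_children = cost, [child for child, _ in child_set]
    return best_cost, best_children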
Program:
class Graph:
    def __init__(self, graph, heuristicNodeList, startNode): # instantiate graph object with graph topology, heuristic values, start node
        self.graph = graph
        self.H = heuristicNodeList
        self.start = startNode
        self.parent = {}
        self.status = {}
        self.solutionGraph = {}
    def setStatus(self, v, val): # set the status of a given node
        self.status[v] = val
def printSolution(self):
print("FOR GRAPH SOLUTION, TRAVERSE THE GRAPH FROM THE
STARTNODE:",self.start)
print(" ")
print(self.solutionGraph)
print(" ")
if flag == True: # initialize minimumCost with the cost of the first set of child node/s
    minimumCost = cost
    costToChildNodeListDict[minimumCost] = nodeList # set the minimum-cost child node/s
    flag = False
else: # compare the current cost with the minimum cost so far
    if minimumCost > cost:
        minimumCost = cost
        costToChildNodeListDict[minimumCost] = nodeList # set the minimum-cost child node/s
def aoStar(self, v, backTracking): # AO* algorithm for a start node and backTracking status flag
    print(" ")
    if solved == True: # if the minimum-cost nodes of v are solved, set the current node status as solved (-1)
        self.setStatus(v, -1)
        self.solutionGraph[v] = childNodeList # update the solution graph with the solved nodes, which may be part of the solution
    if v != self.start: # check whether the current node is the start node, for backtracking the current node value
        self.aoStar(self.parent[v], True) # backtrack the current node value with backtracking status set to True
h1 = {'A': 1, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J':1, 'T': 3}
graph1 = {
'A': [[('B', 1), ('C', 1)], [('D', 1)]],
'B': [[('G', 1)], [('H', 1)]],
'C': [[('J', 1)]],
'D': [[('E', 1), ('F', 1)]],
'G': [[('I', 1)]]
}
G1 = Graph(graph1, h1, 'A')
G1.applyAOStar()
G1.printSolution()
h2 = {'A': 1, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7} # Heuristic values of Nodes
graph2 = { # Graph of Nodes and Edges
'A': [[('B', 1), ('C', 1)], [('D', 1)]], # Neighbors of Node 'A': B, C & D with respective weights
'B': [[('G', 1)], [('H', 1)]], # Neighbors are included in a list of lists
'D': [[('E', 1), ('F', 1)]] # Nodes within a sublist are "AND" nodes; separate sublists are "OR" alternatives
}
G2 = Graph(graph2, h2, 'A') # Instantiate Graph object with graph, heuristic values and start node
G2.applyAOStar() # Run the AO* algorithm
G2.printSolution() # print the solution graph as AO* Algorithm search
OUTPUT:
HEURISTIC VALUES : {'A': 1, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T': 3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T':
3}
SOLUTION GRAPH : {}
PROCESSING NODE : B
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T':
3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 5, 'H': 7, 'I': 7, 'J': 1, 'T':
3}
SOLUTION GRAPH : {}
PROCESSING NODE : G
HEURISTIC VALUES : {'A': 10, 'B': 6, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1, 'T':
3}
SOLUTION GRAPH : {}
PROCESSING NODE : B
HEURISTIC VALUES : {'A': 10, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1, 'T':
3}
SOLUTION GRAPH : {}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 7, 'J': 1, 'T':
3}
SOLUTION GRAPH : {}
PROCESSING NODE : I
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 8, 'H': 7, 'I': 0, 'J': 1, 'T':
3}
SOLUTION GRAPH : {'I': []}
PROCESSING NODE : G
HEURISTIC VALUES : {'A': 12, 'B': 8, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T':
3}
SOLUTION GRAPH : {'I': [], 'G': ['I']}
PROCESSING NODE : B
HEURISTIC VALUES : {'A': 12, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T':
3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : C
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 1, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G']}
PROCESSING NODE : J
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 2, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 0, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G'], 'J': []}
PROCESSING NODE : C
HEURISTIC VALUES : {'A': 6, 'B': 2, 'C': 1, 'D': 12, 'E': 2, 'F': 1, 'G': 1, 'H': 7, 'I': 0, 'J': 0, 'T': 3}
SOLUTION GRAPH : {'I': [], 'G': ['I'], 'B': ['G'], 'J': [], 'C': ['J']}
PROCESSING NODE : A
{'I': [], 'G': ['I'], 'B': ['G'], 'J': [], 'C': ['J'], 'A': ['B', 'C']}
HEURISTIC VALUES : {'A': 1, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 11, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {}
PROCESSING NODE : D
HEURISTIC VALUES : {'A': 11, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 11, 'B': 6, 'C': 12, 'D': 10, 'E': 4, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {}
PROCESSING NODE : E
HEURISTIC VALUES : {'A': 11, 'B': 6, 'C': 12, 'D': 10, 'E': 0, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {'E': []}
PROCESSING NODE : D
HEURISTIC VALUES : {'A': 11, 'B': 6, 'C': 12, 'D': 6, 'E': 0, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {'E': []}
PROCESSING NODE : A
HEURISTIC VALUES : {'A': 7, 'B': 6, 'C': 12, 'D': 6, 'E': 0, 'F': 4, 'G': 5, 'H': 7}
SOLUTION GRAPH : {'E': []}
PROCESSING NODE : F
HEURISTIC VALUES : {'A': 7, 'B': 6, 'C': 12, 'D': 6, 'E': 0, 'F': 0, 'G': 5, 'H': 7}
SOLUTION GRAPH : {'E': [], 'F': []}
PROCESSING NODE : D
HEURISTIC VALUES : {'A': 7, 'B': 6, 'C': 12, 'D': 2, 'E': 0, 'F': 0, 'G': 5, 'H': 7}
SOLUTION GRAPH : {'E': [], 'F': [], 'D': ['E', 'F']}
PROCESSING NODE : A
Result:
Thus, the program for AO* search was executed and output is verified.
Ex.No:3
Implement Naive Bayes Models
Date:
Aim:
To implement the Naive Bayes classifier using Python.
Procedure:
Step 1 - Import basic libraries.
Step 2 - Importing the dataset.
Step 3 - Data preprocessing.
Step 4 - Training the model.
Step 5 - Testing and evaluation of the model.
Step 6 - Visualizing the model.
Program:
# import necessary libraries
import pandas as pd
from sklearn import tree
from sklearn.preprocessing import LabelEncoder
from sklearn.naive_bayes import GaussianNB
# Load Data from CSV
data=pd.read_csv('tennisdata.csv')
print("The first 5 Values of data is :\n", data.head())
print("\nThe First 5 values of the train data is\n", X.head())
y=data.iloc[:, -1]
print("\nThe First 5 values of train output is\n", y.head())
The First 5 values of train output is
0 No
1
No
2 Yes
3 Yes
4 Yes
Name: PlayTennis, dtype: object
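GaussianNB needs numeric inputs, but the cells that encode the string-valued feature columns were lost. A sketch along the usual lines, assuming the tennis dataset's column names (Outlook, Temperature, Humidity, Windy):

le_Outlook = LabelEncoder()
X.Outlook = le_Outlook.fit_transform(X.Outlook)
le_Temperature = LabelEncoder()
X.Temperature = le_Temperature.fit_transform(X.Temperature)
le_Humidity = LabelEncoder()
X.Humidity = le_Humidity.fit_transform(X.Humidity)
le_Windy = LabelEncoder()
X.Windy = le_Windy.fit_transform(X.Windy)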
le_PlayTennis=LabelEncoder()
y=le_PlayTennis.fit_transform(y)
print("\nNow the Train output is\n",y)
Now the Train output is
[0 0 1 1 1 0 1 0 1 1 1 1 1 0]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test=train_test_split(X,y, test_size=0.20)
classifier=GaussianNB()
classifier.fit(X_train, y_train)
from sklearn.metrics import accuracy_score
print("Accuracy is:", accuracy_score(classifier.predict(X_test), y_test))
Output:
Result:
Thus, the Naive Bayes program was executed and the output was verified.
Ex.No:4
Implement Bayesian Networks
Date:
Aim:
To write a Python program to construct and query a Bayesian network.
Procedure:
import pandas as pd
data=pd.read_csv("heartdisease.csv")
heart_disease=pd.DataFrame(data)
print(heart_disease)
age  Gender  Family  diet  Lifestyle  cholestrol  heartdisease
0 0 0 1 1 3 0 1
1 0 1 1 1 3 0 1
2 1 0 0 0 2 1 1
3 4 0 1 1 3 2 0
4 3 1 1 0 0 2 0
5 2 0 1 1 1 0 1
6 4 0 1 0 2 0 1
7 0 0 1 1 3 0 1
8 3 1 1 0 0 2 0
9 1 1 0 0 0 2 1
10 4 1 0 1 2 0 1
11 4 0 1 1 3 2 0
12 2 1 0 0 0 0 0
13 2 0 1 1 1 0 1
14 3 1 1 0 0 1 0
15 0 0 1 0 0 2 1
16 1 1 0 1 2 1 1
17 3 1 1 1 0 1 0
18 4 0 1 1 3 2 0
In [2]:
from pgmpy.models import BayesianModel
model = BayesianModel([
    ('age', 'Lifestyle'),
    ('Gender', 'Lifestyle'),
    ('Family', 'heartdisease'),
    ('diet', 'cholestrol'),
    ('Lifestyle', 'diet'),
    ('cholestrol', 'heartdisease')
])
from pgmpy.estimators import MaximumLikelihoodEstimator
model.fit(heart_disease, estimator=MaximumLikelihoodEstimator)
from pgmpy.inference import VariableElimination
HeartDisease_infer = VariableElimination(model)
In [3]:
print('For age Enter { SuperSeniorCitizen:0, SeniorCitizen:1, MiddleAged:2, Youth:3, Teen:4 }')
print('For Gender Enter { Male:0, Female:1 }')
print('For Family History Enter { yes:1, No:0 }')
print('For diet Enter { High:0, Medium:1 }')
print('For lifeStyle Enter { Athlete:0, Active:1, Moderate:2, Sedentary:3 }')
print('For cholesterol Enter { High:0, BorderLine:1, Normal:2 }')
q =HeartDisease_infer.query(variables=['heartdisease'], evidence={
'age':int(input('Enter age :')),
'Gender':int(input('Enter Gender :')),
'Family':int(input('Enter Family history :')),
'diet':int(input('Enter diet :')),
'Lifestyle':int(input('Enter Lifestyle :')),
'cholestrol':int(input('Enter cholestrol :'))
})
print(q['heartdisease'])
Output:
For age Enter { SuperSeniorCitizen:0, SeniorCitizen:1, MiddleAged:2, Youth:3, Teen:4 }
For Gender Enter { Male:0, Female:1 }
For Family History Enter { yes:1, No:0 }
For diet Enter { High:0, Medium:1 }
For lifeStyle Enter { Athlete:0, Active:1, Moderate:2, Sedentary:3 }
For cholesterol Enter { High:0, BorderLine:1, Normal:2 }
Enter age :1
Enter Gender :1
Enter Family history :0
Enter diet :1
Enter Lifestyle :0
Enter cholestrol :1
+----------------+---------------------+
| heartdisease   |   phi(heartdisease) |
+================+=====================+
| heartdisease_0 |              0.0000 |
+----------------+---------------------+
| heartdisease_1 |              1.0000 |
+----------------+---------------------+
Result:
Thus, the Bayesian networks program was executed and output is verified.
Ex.No:5(a)
Build Regression Models (Linear Regression Model)
Date:
Aim:
To write a Python program to build a regression model using linear regression.
Procedure:
Step 1: Data Pre Processing
Importing The Libraries.
Importing the Data Set.
Encoding the Categorical Data.
Avoiding the Dummy Variable Trap.
Splitting the Data set into Training Set and Test Set.
Step 2: Fitting Multiple Linear Regression to the Training set
Step 3: Predict the Test set results.
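The fitted model has the form price = b0 + b1*area + b2*bedrooms + b3*age; reg.coef_ below returns (b1, b2, b3) and reg.intercept_ returns b0, which the manual check in cell [10] reproduces.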
Program:
import pandas as pd
import numpy as np
from sklearn import linear_model
In [2]:
df=pd.read_csv('homeprices.csv')
df
Out[2]:
area  bedrooms  age  price   (rows not reproduced)
In [3]:
df.bedrooms.median()
Out[3]:
4.0
In [5]:
df.bedrooms=df.bedrooms.fillna(df.bedrooms.median())
df
Out[5]:
area  bedrooms  age  price   (rows not reproduced)
In [6]:
reg =linear_model.LinearRegression()
reg.fit(df.drop('price',axis='columns'),df.price)
Out[6]:
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)
In [7]:
reg.coef_
Out[7]:
array([ 112.06244194, 23388.88007794, -3231.71790863])
In [8]:
reg.intercept_
Out[8]:
221323.00186540408
Find the price of a home with 3000 sq ft area, 3 bedrooms, 40 years old
In [9]:
reg.predict([[3000, 3, 40]])
Out[9]:
array([498408.25158031])
In [10]:
112.06244194*3000 + 23388.88007794*3 +-3231.71790863*40 + 221323.00186540384
Out[10]:
498408.25157402386
Find the price of a home with 2500 sq ft area, 4 bedrooms, 5 years old
In [11]:
reg.predict([[2500, 4, 5]])
Out[11]:
array([578876.03748933])
Result:
Thus, the python program for regression model was executed successfully.
Ex.No:5(b)
Build Regression Models (Logistic Regression Model)
Date:
Aim:
To write a Python program to build a regression model using logistic regression.
Procedure:
Program:
import pandas as pd
from matplotlib import pyplot as plt
%matplotlib inline
In [16]:
df=pd.read_csv("insurance_data.csv")
df.head()
Out[16]:
age bought_insurance
0 22 0
1 25 0
2 47 1
3 52 0
4 46 1
In [17]:
plt.scatter(df.age,df.bought_insurance,marker='+',color='red')
Out[17]:
<matplotlib.collections.PathCollection at 0x20a8cb15d30>
In [18]:
from sklearn.model_selection import train_test_split
In [29]:
X_train, X_test, y_train, y_test=train_test_split(df[['age']],df.bought_insurance,train_size=0.8)
In [30]:
X_test
Out[30]:
age
4 46
8 62
26 23
17 58
24 50
25 54
In [31]:
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
In [66]:
model.fit(X_train, y_train)
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
Out[66]:
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, max_iter=100, multi_class='warn',
                   n_jobs=None, penalty='l2', random_state=None, solver='warn',
                   tol=0.0001, verbose=0, warm_start=False)
In [9]:
X_test
Out[9]:
age
16 25
21 26
2 47
In [39]:
y_predicted=model.predict(X_test)
In [33]:
model.predict_proba(X_test)
Out[33]:
array([[0.40569485, 0.59430515],
[0.26002994, 0.73997006],
[0.63939494, 0.36060506],
[0.29321765, 0.70678235],
[0.36637568, 0.63362432],
[0.32875922, 0.67124078]])
In [34]:
model.score(X_test,y_test)
Out[34]:
1.0
In [40]:
y_predicted
Out[40]:
array([1, 1, 0, 1, 1, 1], dtype=int64)
In [37]:
X_test
Out[37]:
age
4 46
8 62
26 23
17 58
24 50
25 54
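The cell defining prediction_function (and evaluating it at age 35, giving the 0.485 mentioned below) was lost in reproduction. A minimal reconstruction, assuming the usual sigmoid over the fitted coefficients:

import math

def prediction_function(age):
    # linear score from the fitted model, squashed through the sigmoid
    z = model.coef_[0][0] * age + model.intercept_[0]
    return 1 / (1 + math.exp(-z))

prediction_function(35)   # ~0.485 per the note below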
0.485 is less than 0.5, which means a person aged 35 will not buy insurance.
In [77]:
age = 43
prediction_function(age)
Out[77]:
0.568565299077705
0.568 is more than 0.5, which means a person aged 43 will buy the insurance.
Result:
Thus, the python program for logistics regression model was executed successfully.
Ex.No:6(a)
Build Decision Tree
Date:
Aim:
To write a Python program to build a decision tree classifier.
Program:
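The opening cells of this exercise were lost in reproduction; the cells below additionally rely on the following imports:

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score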
from sklearn.metrics import confusion_matrix
data=pd.read_csv('covid19.csv')
In[2]:
#printing first 5 rows
data.head()
Output[2]:
   patient_id  global_num     sex  birth_year  age country province         city   latitude  longitude        infection_case  infection_order     state  label
0  1000000001           2    male        1964  50s   Korea    Seoul   Gangseo-gu  37.460459  126.440680       overseas inflow              1.0  released      0
1  1000000002           5    male        1987  30s   Korea    Seoul  Jungnang-gu  37.478832  126.668558       overseas inflow              1.0  released      0
2  1000000003           6    male        1964  50s   Korea    Seoul    Jongno-gu  37.562143  126.801884  contact with patient             2.0  released      0
3  1000000004           7    male        1991  20s   Korea    Seoul      Mapo-gu  37.567454  127.005627       overseas inflow              1.0  released      0
4  1000000005           9  female        1992  20s   Korea    Seoul  Seongbuk-gu  37.460459  126.440680  contact with patient             2.0  released      0
In[3]
#As all the columns are categorical, check for unique values of each column
for i in data.columns:
print(data[i].unique(),"\t",data[i].nunique())
Output[3]:
1000000115 1000000116 1000000117 1000000118 1000000119 1000000120
1000000121 1000000122 1000000123 1000000124 1000000125 1000000126
In[4]
data.info()
Out[4]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 160 entries, 0 to 159
Data columns (total 14 columns):
patient_id 160 non-null int64
global_num 160 non-null int64
sex 160 non-null object
birth_year 160 non-null int64
age 160 non-null object
country 160 non-null object
province 160 non-null object
city 160 non-null object
latitude 160 non-null float64
longitude 160 non-null float64
infection_case 160 non-null object
infection_order 18 non-null float64
state 160 non-null object
Label 160 non-null int64
dtypes: float64(3), int64(4), object(7)
memory usage: 17.6+ KB
In[5]
#Check how these unique categories are distributed among the columns
for i in data.columns:
print(data[i].value_counts())
print()
Out[5]:
1000000160 1
1000000159 1
1000000058 1
1000000057 1
1000000056 1
1000000055 1
1000000054 1
1000000053 1
1000000052 1
1000000051 1
1000000050 1
1000000049 1
1000000048 1
1000000047 1
1000000046 1
1000000045 1
1000000044 1
1000000043 1
1000000042 1
1000000059 1
1000000060 1
1000000061 1
1000000071 1
1000000078 1
1000000077 1
1000000076 1
1000000075 1
1000000074 1
1000000073 1
1000000072 1
..
1000000090    1
1000000100    1
1000000089 1
1000000088 1
1000000087 1
1000000086 1
1000000085 1
1000000084 1
1000000083 1
1000000099 1
In[6]
#Heatmap of the columns on dataset with each other. It shows Pearson's correlation coefficient
of column w.r.t other columns.
fig=plt.figure(figsize=(10,6))
sns.heatmap(data.corr(),annot=True)
Out[6]:
<matplotlib.axes._subplots.AxesSubplot at 0x1e3673ac748>
In[7]
# As scikit-learn algorithms do not generally work with string values, convert the
# string categories in every column to integers.
le = LabelEncoder()
for col in data.columns:
    data[col] = le.fit_transform(data[col])
Out[7]:
   patient_id  global_num  sex  birth_year  age  country  province  city  latitude  longitude  infection_case  infection_order  state  label
0           0           0    1          22    4        1         0     7        37          2               7                0      2      0
1           1           1    1          45    2        1         0    14        39          5               7                0      2      0
2           2           2    1          22    4        1         0    12        64         24               5                1      2      0
3           3           3    1          49    1        1         0     5        72         54               7                0      2      0
4           4           4    0          50    1        1         0     9        37          2               5                1      2      0
In[8]
fig=plt.figure(figsize=(10,6))
sns.heatmap(data.corr(),annot=True)
Out[8]:
<matplotlib.axes._subplots.AxesSubplot at 0x1e3674c6d30>
In[9]
# X is the dataframe containing the input features
# y is the series of labels to be predicted
X = data[data.columns[:-1]]
y = data['Label']
X.head(2)
Out[9]:
   patient_id  global_num  sex  birth_year  age  country  province  city  latitude  longitude  infection_case  infection_order  state
0           0           0    1          22    4        1         0     7        37          2               7                0      2
1           1           1    1          45    2        1         0    14        39          5               7                0      2
In[10]
# Import train_test_split function
from sklearn.model_selection import train_test_split
# Split dataset into training set (67%) and test set (33%)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
from sklearn.tree import DecisionTreeClassifier
# Create a decision tree classifier
model = DecisionTreeClassifier()
# Train the model using the training sets
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# Import scikit-learn metrics module for accuracy calculation
from sklearn import metrics
# Model accuracy: how often is the classifier correct?
print("Accuracy for Decision Tree:", metrics.accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
Output[10]:
Accuracy for Decision Tree: 0.9433962264150944
[[18 0 0]
[ 1 32 2]
[ 0 0 0]]
precision recall f1-score support
0 0.95 1.00 0.97 18
1 1.00 0.91 0.96 35
2 0.00 0.00 0.00 0
accuracy 0.94 53
macro avg 0.65 0.64 0.64 53
weighted avg 0.98 0.94 0.96 53
print(cross_val_score(model,X,y,cv=10))
[0.94117647 0.875 0.9375 1. 0.9375 1.
0.9375 1. 1. 0.93333333]
Result:
Thus, the python program for decision tree was executed successfully.
Ex.No:6(b)
Build Random Forest Tree
Date:
Aim:
To write a Python program to build a random forest classifier.
Procedure:
Program:
import numpy as np
import pandas as pd
from sklearn import preprocessing
data=pd.read_csv('covid19.csv')
In[2]:
data.head()
Output[2]:
   patient_id  global_num     sex  birth_year  age country province         city   latitude  longitude        infection_case  infection_order     state  label
0  1000000001           2    male        1964  50s   Korea    Seoul   Gangseo-gu  37.460459  126.440680       overseas inflow              1.0  released      0
1  1000000002           5    male        1987  30s   Korea    Seoul  Jungnang-gu  37.478832  126.668558       overseas inflow              1.0  released      0
2  1000000003           6    male        1964  50s   Korea    Seoul    Jongno-gu  37.562143  126.801884  contact with patient             2.0  released      0
3  1000000004           7    male        1991  20s   Korea    Seoul      Mapo-gu  37.567454  127.005627       overseas inflow              1.0  released      0
4  1000000005           9  female        1992  20s   Korea    Seoul  Seongbuk-gu  37.460459  126.440680  contact with patient             2.0  released      0
In[3]
#As all the columns are categorical, check for unique values of each column
for i in data.columns:
print(data[i].unique(),"\t",data[i].nunique())
Output[3]:
In[4]
data.info()
Out[4]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 160 entries, 0 to 159
Data columns (total 14 columns):
patient_id 160 non-null int64
global_num 160 non-null int64
sex 160 non-null object
birth_year 160 non-null int64
age 160 non-null object
country 160 non-null object
province 160 non-null object
city 160 non-null object
latitude 160 non-null float64
longitude 160 non-null float64
infection_case 160 non-null object
infection_order 18 non-null float64
state 160 non-null object
Label 160 non-null int64
dtypes: float64(3), int64(4), object(7)
memory usage: 17.6+ KB
In[5]
#Check how these unique categories are distributed among the columns
for i in data.columns:
print(data[i].value_counts())
print()
Out[5]:
1000000160 1
1000000159 1
1000000058 1
1000000057 1
1000000056 1
1000000055 1
1000000054 1
1000000053 1
1000000052 1
1000000051 1
1000000050 1
1000000049 1
1000000048 1
1000000047 1
1000000046 1
1000000045 1
1000000044 1
1000000043 1
1000000042 1
1000000059 1
1000000060 1
1000000061 1
1000000071 1
1000000078 1
1000000077 1
1000000076 1
1000000075 1
1000000074 1
1000000073 1
1000000072 1
..
1000000090    1
1000000100    1
1000000089    1
1000000088    1
1000000087    1
1000000086    1
1000000085    1
1000000084    1
1000000083    1
1000000099    1
In[6]
#Heatmap of the columns on dataset with each other. It shows Pearson's correlation coefficient
of column w.r.t other columns.
fig=plt.figure(figsize=(10,6))
sns.heatmap(data.corr(),annot=True)
Out[6]:
<matplotlib.axes._subplots.AxesSubplot at 0x1e3673ac748>
In[7]
# As scikit-learn algorithms do not generally work with string values, convert the
# string categories in every column to integers.
le = LabelEncoder()
for col in data.columns:
    data[col] = le.fit_transform(data[col])
Out[7]:
   patient_id  global_num  sex  birth_year  age  country  province  city  latitude  longitude  infection_case  infection_order  state  label
0           0           0    1          22    4        1         0     7        37          2               7                0      2      0
1           1           1    1          45    2        1         0    14        39          5               7                0      2      0
2           2           2    1          22    4        1         0    12        64         24               5                1      2      0
3           3           3    1          49    1        1         0     5        72         54               7                0      2      0
4           4           4    0          50    1        1         0     9        37          2               5                1      2      0
In[8]
fig=plt.figure(figsize=(10,6))
sns.heatmap(data.corr(),annot=True)
Out[8]:
<matplotlib.axes._subplots.AxesSubplot at 0x1e3674c6d30>
In[9]
# X is the dataframe containing the input features
# y is the series of labels to be predicted
X = data[data.columns[:-1]]
y = data['Label']
X.head(2)
Out[9]:
   patient_id  global_num  sex  birth_year  age  country  province  city  latitude  longitude  infection_case  infection_order  state
0           0           0    1          22    4        1         0     7        37          2               7                0      2
1           1           1    1          45    2        1         0    14        39          5               7                0      2
In[10]
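The split and classifier import were lost in reproduction; a reconstruction consistent with the cells below:

from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, classification_report
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)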
clf=RandomForestClassifier(n_estimators=100)
clf.fit(X_train,y_train)
y_pred=clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
Output[10]:
Accuracy for random forest: 0.9056603773584906
[[15 4 0]
[ 0 33 0]
[ 0 1 0]]
precision recall f1-score support
Result:
Thus, the python program for random forest tree was executed successfully.
Ex.No:7
Build SVM Models
Date:
Aim:
To write a Python program to build SVM models.
Procedure:
Program:
import numpy as np
import pandas as pd
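Several imports used by the cells below were lost in reproduction; a reconstruction:

import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib import pyplot
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report, roc_auc_score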
data=pd.read_csv('covid19.csv')
In[2]:
data.head()
Output[2]:
   patient_id  global_num     sex  birth_year  age country province         city   latitude  longitude        infection_case  infection_order     state  label
0  1000000001           2    male        1964  50s   Korea    Seoul   Gangseo-gu  37.460459  126.440680       overseas inflow              1.0  released      0
1  1000000002           5    male        1987  30s   Korea    Seoul  Jungnang-gu  37.478832  126.668558       overseas inflow              1.0  released      0
2  1000000003           6    male        1964  50s   Korea    Seoul    Jongno-gu  37.562143  126.801884  contact with patient             2.0  released      0
3  1000000004           7    male        1991  20s   Korea    Seoul      Mapo-gu  37.567454  127.005627       overseas inflow              1.0  released      0
4  1000000005           9  female        1992  20s   Korea    Seoul  Seongbuk-gu  37.460459  126.440680  contact with patient             2.0  released      0
In[3]
#As all the columns are categorical, check for unique values of each column
for i in data.columns:
print(data[i].unique(),"\t",data[i].nunique())
Output[3]:
1000000043 1000000044 1000000045 1000000046 1000000047 1000000048
1000000049 1000000050 1000000051 1000000052 1000000053 1000000054
1000000055 1000000056 1000000057 1000000058 1000000059 1000000060
1000000061 1000000062 1000000063 1000000064 1000000065 1000000066
1000000067 1000000068 1000000069 1000000070 1000000071 1000000072
1000000073 1000000074 1000000075 1000000076 1000000077 1000000078
1000000079 1000000080 1000000081 1000000082 1000000083 1000000084
1000000085 1000000086 1000000087 1000000088 1000000089 1000000090
1000000091 1000000092 1000000093 1000000094 1000000095 1000000096
1000000097 1000000098 1000000099 1000000100 1000000101 1000000102
1000000103 1000000104 1000000105 1000000106 1000000107 1000000108
1000000109 1000000110 1000000111 1000000112 1000000113 1000000114
1000000115 1000000116 1000000117 1000000118 1000000119 1000000120
1000000121 1000000122 1000000123 1000000124 1000000125 1000000126
In[4]
data.info()
Out[4]:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 160 entries, 0 to 159
Data columns (total 14 columns):
patient_id 160 non-null int64
global_num 160 non-null int64
sex 160 non-null object
birth_year 160 non-null int64
age 160 non-null object
country 160 non-null object
province 160 non-null object
city 160 non-null object
latitude 160 non-null float64
longitude 160 non-null float64
infection_case 160 non-null object
infection_order 18 non-null float64
state 160 non-null object
Label 160 non-null int64
dtypes: float64(3), int64(4), object(7)
memory usage: 17.6+ KB
In[5]
#Check how these unique categories are distributed among the columns
for i in data.columns:
print(data[i].value_counts())
print()
Out[5]:
1000000160 1
1000000159 1
1000000058 1
1000000057 1
1000000056 1
1000000055 1
1000000054 1
1000000053 1
1000000052 1
1000000051 1
1000000050 1
1000000049 1
1000000048 1
1000000047 1
1000000046 1
1000000045 1
1000000044 1
1000000043 1
1000000042 1
1000000059 1
1000000060 1
1000000061 1
1000000071 1
1000000078 1
1000000077 1
1000000076 1
1000000075 1
1000000074 1
1000000073 1
1000000072 1
..
1000000090    1
1000000100    1
1000000089    1
1000000088    1
1000000087    1
1000000086    1
1000000085    1
1000000084    1
1000000083    1
1000000099    1
In[6]
#Heatmap of the columns on dataset with each other. It shows Pearson's correlation coefficient
of column w.r.t other columns.
fig=plt.figure(figsize=(10,6))
sns.heatmap(data.corr(),annot=True)
Out[6]:
<matplotlib.axes._subplots.AxesSubplot at 0x1e3673ac748>
<matplotlib.axes._subplots.AxesSubplot at 0x1e3673ac748>
In[7]
# As scikit-learn algorithms do not generally work with string values, convert the
# string categories in every column to integers.
le = LabelEncoder()
for col in data.columns:
    data[col] = le.fit_transform(data[col])
Out[7]:
   patient_id  global_num  sex  birth_year  age  country  province  city  latitude  longitude  infection_case  infection_order  state  label
0           0           0    1          22    4        1         0     7        37          2               7                0      2      0
1           1           1    1          45    2        1         0    14        39          5               7                0      2      0
2           2           2    1          22    4        1         0    12        64         24               5                1      2      0
3           3           3    1          49    1        1         0     5        72         54               7                0      2      0
4           4           4    0          50    1        1         0     9        37          2               5                1      2      0
In[8]
fig=plt.figure(figsize=(10,6))
sns.heatmap(data.corr(),annot=True)
Out[8]:
<matplotlib.axes._subplots.AxesSubplot at 0x1e3674c6d30>
In[9]
# X is the dataframe containing the input features
# y is the series of labels to be predicted
X = data[data.columns[:-1]]
y = data['Label']
X.head(2)
Out[9]:
   patient_id  global_num  sex  birth_year  age  country  province  city  latitude  longitude  infection_case  infection_order  state
0           0           0    1          22    4        1         0     7        37          2               7                0      2
1           1           1    1          45    2        1         0    14        39          5               7                0      2
In[10]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33) # 67% training and 33% test
from sklearn import svm
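The classifier definition was lost in reproduction; a linear-kernel SVC is assumed:

clf = svm.SVC(kernel='linear')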
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
#normalizer
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print(roc_auc_score(y_test, y_pred))
# summarize scores and plot the ROC curve; ns_fpr/ns_tpr (the no-skill baseline)
# come from a roc_curve() call lost in reproduction
pyplot.plot(ns_fpr, ns_tpr, linestyle='--', label='No Skill')
pyplot.show()
Output[10]:
Accuracy for SVM: 0.9245283018867925
[[23 1 0]
[ 0 26 3]
[ 0 0 0]]
              precision    recall  f1-score   support
    accuracy                           0.92        53
   macro avg       0.65      0.62     0.64        53
weighted avg       0.98      0.92     0.95        53
0.9813218390804598
No Skill: ROC AUC=0.500
SVM: ROC AUC=0.981
C:\Users\acer\Anaconda3\lib\site-
packages\sklearn\metrics\classification.py:1439: UndefinedMetricWarning:
Recall and F-score are ill-defined and being set to 0.0 in labels with no
true samples.
'recall', 'true', average, warn_for)
Result:
Thus, the Python program to build SVM models was executed successfully.
Ex.No:8
Implement Ensembling Techniques (K- Means)
Date:
Aim:
To implement ensembling techniques by comparing K-Means and Gaussian-mixture (EM) clustering on the Iris dataset.
Procedure:
Step 1 - Import basic libraries.
Step 2 - Importing the dataset.
Step 3 - Data preprocessing.
Step 4 - Training the model.
Step 5 - Testing and evaluation of the model.
Step 6 - Visualizing the model.
Program:
from sklearn.cluster import KMeans
from sklearn import preprocessing
from sklearn.mixture import GaussianMixture
from sklearn.datasets import load_iris
import sklearn.metrics as sm
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
In [2]:
dataset=load_iris()
# print(dataset)
In [3]:
X=pd.DataFrame(dataset.data)
X.columns=['Sepal_Length','Sepal_Width','Petal_Length','Petal_Width']
y=pd.DataFrame(dataset.target)
y.columns=['Targets']
# print(X)
In [4]:
plt.figure(figsize=(14,7))
colormap=np.array(['red','lime','black'])
# REAL PLOT
plt.subplot(1,3,1)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[y.Targets],s=40)
plt.title('Real')
# K-PLOT
plt.subplot(1,3,2)
model=KMeans(n_clusters=3)
model.fit(X)
predY=np.choose(model.labels_,[0,1,2]).astype(np.int64)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[predY],s=40)
plt.title('KMeans')
# GMM PLOT
scaler=preprocessing.StandardScaler()
scaler.fit(X)
xsa=scaler.transform(X)
xs=pd.DataFrame(xsa,columns=X.columns)
gmm=GaussianMixture(n_components=3)
gmm.fit(xs)
y_cluster_gmm=gmm.predict(xs)
plt.subplot(1,3,3)
plt.scatter(X.Petal_Length,X.Petal_Width,c=colormap[y_cluster_gmm],s=40)
plt.title('GMM Classification')
Out[4]:
Text(0.5, 1.0, 'GMM Classification')
Result:
Thus, the python program for the ensemble techniques using k means plotting was executed
successfully.
Ex.No:9
Implement Clustering Algorithms
Date:
Aim:
To write a Python program to implement the k-means clustering algorithm.
Procedure:
Step 1 - Import basic libraries.
Step 2 - Importing the dataset.
Step 3 - Data preprocessing.
Step 4 - Training the model using k-Means.
Step 5 - Testing and evaluation of the model.
Step 6 - Visualizing the model.
Program:
from sklearn.cluster import KMeans
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from matplotlib import pyplot as plt
%matplotlib inline
In [2]:
df=pd.read_csv("income.csv")
df.head()
Out[2]:
      Name  Age  Income($)
0      Rob   27      70000
1  Michael   29      90000
2    Mohan   29      61000
3   Ismail   28      60000
4     Kory   42     150000
In [3]:
plt.scatter(df.Age,df['Income($)'])
plt.xlabel('Age')
plt.ylabel('Income($)')
Out[3]:
Text(0, 0.5, 'Income($)')
In [4]:
km=KMeans(n_clusters=3)
y_predicted=km.fit_predict(df[['Age','Income($)']])
y_predicted
Out[4]:
array([0, 0, 2, 2, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 2])
In [5]:
df['cluster']=y_predicted
df.head()
Out[5]:
      Name  Age  Income($)  cluster
0      Rob   27      70000        0
1  Michael   29      90000        0
2    Mohan   29      61000        2
3   Ismail   28      60000        2
4     Kory   42     150000        1
In [6]:
km.cluster_centers_
Out[6]:
array([[3.40000000e+01, 8.05000000e+04],
[3.82857143e+01, 1.50000000e+05],
[3.29090909e+01, 5.61363636e+04]])
In [7]:
Preprocessing using min max scaler
In [8]:
scaler=MinMaxScaler()
scaler.fit(df[['Income($)']])
df['Income($)'] =scaler.transform(df[['Income($)']])
scaler.fit(df[['Age']])
df['Age'] =scaler.transform(df[['Age']])
In [9]:
(df.head() output: Name, Age, Income($), cluster; rows not reproduced)
In [10]:
plt.scatter(df.Age,df['Income($)'])
Out[10]:
<matplotlib.collections.PathCollection at 0x13943d5be20>
In [11]:
km=KMeans(n_clusters=3)
y_predicted=km.fit_predict(df[['Age','Income($)']])
y_predicted
Out[11]:
array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2])
In [12]:
df['cluster']=y_predicted
df.head()
Out[12]:
(df.head() output: Name, Age, Income($), cluster; rows not reproduced)
In [13]:
km.cluster_centers_
Out[13]:
array([[0.72268908, 0.8974359 ],
[0.1372549 , 0.11633428],
[0.85294118, 0.2022792 ]])
In [16]:
df1=df[df.cluster==0]
df2=df[df.cluster==1]
df3=df[df.cluster==2]
plt.scatter(df1.Age,df1['Income($)'],color='green')
plt.scatter(df2.Age,df2['Income($)'],color='red')
plt.scatter(df3.Age,df3['Income($)'],color='black')
plt.scatter(km.cluster_centers_[:,0],km.cluster_centers_[:,1],color='purple',marker='*',label='centroid')
plt.legend()
Out[16]:
<matplotlib.legend.Legend at 0x13943d23190>
Elbow Plot
In [17]:
sse = []
k_rng = range(1, 10)
for k in k_rng:
    km = KMeans(n_clusters=k)
    km.fit(df[['Age','Income($)']])
    sse.append(km.inertia_)
C:\Users\janaj\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py:881:
UserWarning: KMeans is known to have a memory leak on Windows with MKL, when
there are less chunks than available threads. You can avoid it by setting the
environment variable OMP_NUM_THREADS=1.
warnings.warn(
In [18]:
plt.xlabel('K')
plt.ylabel('Sum of squared error')
plt.plot(k_rng,sse)
Out[18]:
[<matplotlib.lines.Line2D at 0x13945f3d940>]
Result:
Thus, the python program for clustering algorithm using k-means was executed successfully.
Ex.No:10
Implement EM for Bayesian Networks
Date:
Aim:
To write a Python program to implement the EM algorithm (illustrated on the two-coins problem) for learning with hidden variables in Bayesian networks.
Procedure:
Step 1 - Import basic libraries.
Step 2 - Importing the dataset.
Step 3 - Data preprocessing.
Step 4 - Training the model with EM iterations (E-step and M-step).
Step 5 - Testing and evaluation of the model.
Step 6 - Visualizing the model.
Program:
In [1]:
import math
import numpy as np
import matplotlib.pyplot as plt
Helper Functions
In [2]:
def nCr(n, r):
"""
Args:
n: int, the total number
r: int, the number of selected
Returns:
C_n^r, the combination number
"""
f = math.factorial
return f(n) / f(r) / f(n-r)
def binomial(x, n, p):
"""
The binomial distribution C_n^x p^x (1-p)^(n-x)
Args:
n: int, the total number
x: int, the number of selected
p: float, the probability
Returns:
The binomial probability C_n^x p^x (1-p)^(n-x)
"""
return nCr(n, x) * (p**x) * (1-p)**(n-x)
def log_likelihood(X, n, theta):
"""
Calculates the log likelihood function for the two coins problem.
Args:
X: np.array of shape (n_trials,), dtype int,
the observations (number of heads at each trial).
n: int, total number of tosses per trial.
theta: tuple of (lambda, pA, pB), where
- lambda: float, the prior probability of selecting coin A (=1/2)
- pA: float, coin A's probability of showing head
- pB: float, coin B's probability of showing head
Returns:
log-likelihood
f(theta) = sum_i log sum_zi P(xi, zi; theta)
         = sum_i log[ lam * nCr(10, xi) pA^xi (1-pA)^(10-xi)
                      + (1-lam) * nCr(10, xi) pB^xi (1-pB)^(10-xi) ]
"""
(lam, p1, p2) = theta
ll = 0
for x in X:
    ll += np.log(
        lam * binomial(x, n, p1) + (1-lam) * binomial(x, n, p2)
    )
return ll
def ELBO(X, n, Q, theta):
"""
Calculates the ELBO for the two coins problem.
Args:
X: np.array of shape (n_trials,), dtype int,
the observations (number of heads at each trial).
n: int, total number of tosses per trial.
Q: np.array of shape (n_trials, 2), dtype float,
the hidden posterior q(z) (z = A, B) computed in the E-step.
theta: tuple of (lambda, pA, pB), where
- lambda: float, the prior probability of selecting coin A (=1/2)
- pA: float, coin A's probability of showing head
- pB: float, coin B's probability of showing head
Returns:
ELBO (Evidence Lower Bound)
g(theta) = sum_i sum_zi Q(zi) log( P(xi, zi; theta) / Q(zi) )
"""
(lam, p1, p2) = theta
elbo = 0
for i, x in enumerate(X):
    elbo += Q[i, 0] * np.log(lam * binomial(x, n, p1) / Q[i, 0])
    elbo += Q[i, 1] * np.log((1-lam) * binomial(x, n, p2) / Q[i, 1])
return elbo
def plot_coin_function(grid_fn, title, path=[]):
"""
Plots a function wrtpA, pB using 2D contours.
Reference: https://nbviewer.jupyter.org/github/eecs445-f16/umich-eecs445-
f16/blob/master/handsOn_lecture17_clustering-mixtures-
em/handsOn_lecture17_clustering-mixtures-em.ipynb#Problem:-implement-EM-for-
Coin-Flips
Args:
grid_fn: callable, a function that takes pA, pB as inputs
and returns the function value at that point.
title: string, title of the plot.
path: (optional) A list of tuple of (pA, pB) that are visited in the
EM iterations.
Visualized as line segments if not empty.
Returns:
Shows the figure and returns None.
"""
xvals = np.linspace(0.01, 0.99, 100)
yvals = np.linspace(0.01, 0.99, 100)
xx, yy = np.meshgrid(xvals, yvals)
grid = np.zeros([len(xvals), len(yvals)])
for i in range(len(xvals)):
    for j in range(len(yvals)):
        grid[j, i] = grid_fn(xvals[i], yvals[j])
plt.figure(figsize=(6, 4.5), dpi=100)
C = plt.contour(xx, yy, grid, 1000)
cbar=plt.colorbar(C)
plt.title(title,fontsize=15)
plt.xlabel(r"$p_A$",fontsize=12)
plt.ylabel(r"$p_B$",fontsize=12)
if path:
    p1, p2 = zip(*path)
    plt.plot(p1, p2, 'g+-')
    plt.text(p1[0]-0.15, p2[0]-0.05, 'start:\n$p_A={}$\n$p_B={}$'.format(p1[0], p2[0]), color='green', size=10)
    plt.text(p1[-1]+0.0, p2[-1]+0.02, 'end:\n$p_A={:.3f}$\n$p_B={:.3f}$'.format(p1[-1], p2[-1]), color='green', size=10)
plt.show()
def plot_coin_likelihood(X, n, path=[]):
"""
Plots the coin likelihood wrt pA, pB using 2D contours.
Args:
X: np.array of shape (n_trials,), dtype int,
the observations (number of heads at each trial).
n: int, total number of tosses per trial.
path: (optional) A list of tuple of (pA, pB) that are visited in the
EM iterations.
Visualized as line segments if not empty.
Returns:
Shows the figure and returns None.
"""
grid_fn = lambda pA, pB: log_likelihood(X, n, (lam, pA, pB))
return plot_coin_function(
grid_fn=grid_fn,
title=r"Log-Likelihood $\log p(\mathcal{X}| \{p_A, p_B\})$",
path=path
)
def plot_coin_ELBO(X, n, Q, path=None):
"""
Plots the coin ELBO wrt pA, pB using 2D contours.
Reference: https://nbviewer.jupyter.org/github/eecs445-f16/umich-eecs445-
f16/blob/master/handsOn_lecture17_clustering-mixtures-
em/handsOn_lecture17_clustering-mixtures-em.ipynb#Problem:-implement-EM-for-
Coin-Flips
Args:
X: np.array of shape (n_trials,), dtype int,
the observations (number of heads at each trial).
n: int, total number of tosses per trial.
Q: np.array of shape (n_trials, 2), dtype float,
the hidden posterior q(z) (z = A, B) computed in the E-step.
path: (optional) A list of tuple of (pA, pB) that are visited in the
EM iterations.
Visualized as line segments if not empty.
Returns:
Shows the figure and returns None.
"""
grid_fn = lambda pA, pB: ELBO(X, n, Q, (lam, pA, pB))
return plot_coin_function(grid_fn=grid_fn, title="ELBO", path=path)
In [3]:
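The observations and fixed prior are missing from the listing. The printed E-step values below correspond to the classic five-trials, ten-tosses-per-trial coin experiment, so the following reconstruction is assumed:

X = np.array([5, 9, 8, 4, 7])  # heads observed in each trial (assumed; consistent with the trace below)
n = 10                         # tosses per trial
lam = 0.5                      # fixed prior probability of selecting coin A (= 1/2, per the docstrings above)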
p1=0.6# parameter: pA
p2=0.5# parameter: pB
n_trials=len(X)# number of trials
n_iters=10# number of EM iterations
path=[(p1,p2)]
print('Init: theta = ')
print(p1,p2)
for i in range(n_iters):
print(f'==========EM Iter: {i+1}==========')
# E-step
q=np.zeros([n_trials,2])
    for trial in range(n_trials):
x=X[trial]
q[trial,0]=lam*binomial(x,n,p1)
q[trial,1]=(1-lam)*binomial(x,n,p2)
q[trial,:]/=np.sum(q[trial,:])
# M-step
p1=sum((np.array(X)/n)*q[:,0])/sum(q[:,0])
p2=sum((np.array(X)/n)*q[:,1])/sum(q[:,1])
path.append([p1,p2])
plot_coin_likelihood(X,n,path)
plot_coin_ELBO(X,n,q,path)
Init: theta =
0.6 0.5
==========EM Iter: 1==========
E-step: q(z) =
[[0.44914893 0.55085107]
[0.80498552 0.19501448]
[0.73346716 0.26653284]
[0.35215613 0.64784387]
[0.64721512 0.35278488]]
M-step: theta =
0.713012235400516 0.5813393083136625
==========EM Iter: 2==========
E-step: q(z) =
[[0.29581932 0.70418068]
[0.81151045 0.18848955]
[0.70642201 0.29357799]
[0.19014454 0.80985546]
[0.57353393 0.42646607]]
M-step: theta =
0.7452920360819947 0.5692557501718727
==========EM Iter: 3==========
E-step: q(z) =
[[0.21759232 0.78240768]
[0.86984852 0.13015148]
[0.75115408 0.24884592]
[0.11159059 0.88840941]
[0.57686907 0.42313093]]
M-step: theta =
0.7680988343673212 0.5495359141383477
==========EM Iter: 4==========
E-step: q(z) =
[[0.16170261 0.83829739]
[0.91290493 0.08709507]
[0.79426368 0.20573632]
[0.06633343 0.93366657]
[0.58710461 0.41289539]]
M-step: theta =
0.7831645842999738 0.5346174541475203
==========EM Iter: 5==========
E-step: q(z) =
[[0.12902034 0.87097966]
[0.93537835 0.06462165]
[0.82155069 0.17844931]
[0.04499518 0.95500482]
[0.59420506 0.40579494]]
M-step: theta =
0.7910552458637526 0.5262811670299318
==========EM Iter: 6==========
E-step: q(z) =
[[0.11354215 0.88645785]
[0.94527968 0.05472032]
[0.83523177 0.16476823]
[0.03622405 0.96377595]
[0.59798906 0.40201094]]
M-step: theta =
0.7945325379936995 0.5223904375178746
==========EM Iter: 7==========
E-step: q(z) =
[[0.10708809 0.89291191]
[0.94933575 0.05066425]
[0.8412686 0.1587314 ]
[0.03280939 0.96719061]
[0.59985308 0.40014692]]
M-step: theta =
0.7959286672497985 0.5207298780860258
==========EM Iter: 8==========
E-step: q(z) =
[[0.10455728 0.89544272]
[0.95095406 0.04904594]
[0.84378118 0.15621882]
[0.0315032 0.9684968 ]
[0.60074317 0.39925683]]
M-step: theta =
0.7964656379225262 0.5200471890029875
==========EM Iter: 9==========
E-step: q(z) =
[[0.10359135 0.89640865]
[0.95159456 0.04840544]
[0.84480318 0.15519682]
[0.03100653 0.96899347]
[0.60115794 0.39884206]]
M-step: theta =
0.7966683078984395 0.5197703896938073
==========EM Iter: 10==========
E-step: q(z) =
[[0.10322699 0.89677301]
[0.95184768 0.04815232]
[0.8452156 0.1547844 ]
[0.03081812 0.96918188]
[0.60134719 0.39865281]]
M-step: theta =
0.7967441494752118 0.5196586622041123
Result:
Thus, the python program for EM for Bayesian networks was executed successfully.
Ex.No:11
Build Simple Neural Network Models
Date:
Aim:
To write a Python program to build simple neural network models.
Procedure:
Step 1 - Import basic libraries.
Step 2 - Importing the dataset.
Step 3 - Data preprocessing.
Step 4 - Training the model.
Step 5 - Testing and evaluation of the model.
Step 6 - Visualizing the model.
Program:
Importing libraries
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import matplotlib.pyplot as plt
import keras
from keras.datasets import mnist
from keras.models import Sequential,model_from_json
from keras.layers import Dense
from keras.optimizers import RMSprop
import pylab as plt
Using TensorFlow backend.
Keras is the deep learning library that helps you to code Deep Neural Networks with fewer lines
of code
Import data
batch_size = 128
num_classes = 10
epochs = 2
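The cells that load and reshape the data were lost in reproduction; a reconstruction along the usual MNIST lines (flattened 28x28 images, one-hot labels for the softmax output):

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32')
x_test = x_test.reshape(10000, 784).astype('float32')
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)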
# Normalize to 0 to 1 range
x_train /= 255
x_test /= 255
Visualize Data
print("Label:",y_test[2:3])
plt.imshow(x_test[2:3].reshape(28,28), cmap='gray')
plt.show()
Design a model
first_layer_size = 32
model = Sequential()
model.add(Dense(first_layer_size, activation='sigmoid', input_shape=(784,)))
model.add(Dense(32, activation='sigmoid'))
model.add(Dense(32, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
(model.summary() layer table not reproduced)
Non-trainable params: 0
w = []
for layer in model.layers:
weights = layer.get_weights()
w.append(weights)
layer1 = np.array(w[0][0])
print("Shape of First Layer",layer1.shape)
print("Visualization of First Layer")
fig=plt.figure(figsize=(12, 12))
columns = 8
rows = int(first_layer_size/8)
for i in range(1, columns*rows +1):
fig.add_subplot(rows, columns, i)
plt.imshow(layer1[:,i-1].reshape(28,28),cmap='gray')
plt.show()
Compiling a Model
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
Training
# Write the Training input and output variables, size of the batch, number of epochs
history = model.fit(x_train,y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1)
Epoch 1/2
Epoch 2/2
Testing
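The evaluation call producing the scores below was lost in reproduction:

score = model.evaluate(x_test, y_test, verbose=0)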
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Test loss: 0.46789013657569883
w = []
for layer in model.layers:
weights = layer.get_weights()
w.append(weights)
layer1 = np.array(w[0][0])
print("Shape of First Layer",layer1.shape)
print("Visualization of First Layer")
fig=plt.figure(figsize=(12, 12))
columns = 8
rows = int(first_layer_size/8)
for i in range(1, columns*rows +1):
fig.add_subplot(rows, columns, i)
plt.imshow(layer1[:,i-1].reshape(28,28),cmap='gray')
plt.show()
Take away
Each of the nodes will look for a specific pattern in the input
A node will get activated if input is similar to the feature it looks for
Prediction
# Write the index of the test sample to test
prediction = model.predict(x_test[])
prediction = prediction[0]
print('Prediction\n',prediction)
print('\nThresholded output\n',(prediction>0.5)*1)
Ground truth
plt.imshow(x_test[].reshape(28,28),cmap='gray')
plt.show()
User Input
# Load library
import cv2
import numpy as np
from matplotlib import pyplot as plt
# Show image
plt.imshow(image_rgb), plt.axis("off")
plt.show()
image = cv2.imread('.jpg', cv2.IMREAD_GRAYSCALE)
image_resized = cv2.resize(image, (28, 28))
# Show image
plt.imshow(image_resized, cmap='gray'), plt.axis("off")
plt.show()
Prediction
prediction = model.predict(image_resized.reshape(1,784))
print('Prediction Score:\n',prediction[0])
thresholded = (prediction>0.5)*1
print('\nThresholded Score:\n',thresholded[0])
print('\nPredicted Digit:\n',np.where(thresholded == 1)[1][0])
Prediction Score:
Thresholded Score:
[0 0 0 1 0 0 0 0 0 0]
Predicted Digit:
3
Saving a model
# Write the file name of the model
model.save_weights("model.h5")
print("Saved model to disk")
Saved model to disk
Loading a model
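The cell that re-creates the architecture was lost in reproduction. Since model_from_json is imported above, it is assumed the architecture was saved as JSON (model.to_json()) alongside the weights; a reconstruction, including the compile step required before retraining:

with open('model.json', 'r') as json_file:
    loaded_model = model_from_json(json_file.read())
loaded_model.compile(loss='categorical_crossentropy', optimizer=RMSprop(), metrics=['accuracy'])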
loaded_model.load_weights("model.h5")
print("Loaded model from disk")
Loaded model from disk
Retraining a model
history = loaded_model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1)
score = loaded_model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
Epoch 1/2
Epoch 2/2
Saving a model and resuming training later is a great relief when training large neural networks!
model = Sequential()
model.add(Dense(8, activation='sigmoid', input_shape=(784,)))
model.add(Dense(8, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
(model.summary() layer table not reproduced)
Epoch 1/2
# Write your code here
model.add(Dense(8, activation='tanh'))
model.add(Dense(8, activation='linear'))
model.add(Dense(8, activation='hard_sigmoid'))
Tips
first_layer_size = 8
model = Sequential()
model.add(Dense(first_layer_size, activation='sigmoid', input_shape=(784,)))
model.add(Dense(32, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
w = []
for layer in model.layers:
weights = layer.get_weights()
w.append(weights)
layer1 = np.array(w[0][0])
print("Shape of First Layer",layer1.shape)
print("Visualization of First Layer")
model = Sequential()
model.add(Dense(4, activation='relu', input_shape=(784,)))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
(model.summary() layer table not reproduced)
Epoch 1/2
Epoch 2/2
Result:
Thus, the python program for simple NN models was executed successfully.
Ex.No:12
Build Deep Learning Neural Network models
Date:
Aim:
To write a Python program to build deep learning neural network (CNN) models.
Procedure:
Step 1 - Import basic libraries.
Step 2 - Importing the dataset.
Step 3 - Data preprocessing.
From keras library we are going to use image preprocessing task, to normalize
the image pixel values in between 0 to 1.
Model is imported to load various Neural Network models such as Sequential.
We are going to use Stochastic Gradient Descent (SGD) as an optimizer.
Keras layers such as Dense, Flatten, Conv2D and MaxPooling is used to
implement the CNN model.
Program:
Convolution Neural Networks are mainly used for large-size input data such as image data.
from keras import models
import matplotlib.pyplot as plt
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model
from keras.optimizers import SGD
from keras import layers
from keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
from keras import Input
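The training and validation generators were lost in reproduction; a reconstruction mirroring the surviving test generator below, with the directory names being assumptions:

train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    'cellimage/train/', target_size=(64, 64), batch_size=32, class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
    'cellimage/validation/', target_size=(64, 64), batch_size=32, class_mode='categorical')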
test_generator=test_datagen.flow_from_directory(
'cellimage/test/',
target_size=(64, 64),
batch_size=1,
class_mode='categorical',
shuffle=False)
We are going to use 2 convolution layers with a 3*3 filter and ReLU as the activation function.
Then a max-pooling layer with a 2*2 filter is used.
After that we use a Flatten layer.
Then a Dense layer is used with the ReLU function.
In the output layer the softmax function is used with 4 neurons, as we have a four-class dataset.
model.summary() is used to check the overall architecture of the model with the number of learnable parameters in each layer; a sketch of this model follows.
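A sketch of the model just described. The 64 filters in the second convolution and the pooled shape are confirmed by the surviving summary line, (None, 14, 14, 64); the first layer's 32 filters and the 64-unit dense layer are assumptions:

model = models.Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)))  # 64x64 RGB input
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))          # -> (None, 14, 14, 64), matching the summary line below
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(4, activation='softmax'))  # four-class output
model.summary()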
max_pooling2d_2 (MaxPooling2 (None, 14, 14, 64) 0
B2. Compile the model with SGD(Stochastic Gradient Descent) and train it with 10 epochs.
In [4]:
sgd=SGD(lr=0.01,decay=1e-6, momentum=0.9, nesterov=True)
# We are going to use accuracy metrics and cross entropy loss as performance parameters
model.compile(sgd, loss='categorical_crossentropy', metrics=['acc'])
# Train the model
history=model.fit_generator(train_generator,
steps_per_epoch=train_generator.samples/train_generator.batch_size,
epochs=10,
validation_data=validation_generator,
validation_steps=validation_generator.samples/validation_generator.batch_size,
verbose=1)
WARNING:tensorflow:From C:\Users\sozhan\Anaconda2\lib\site-
packages\keras\backend\tensorflow_backend.py:2885: calling reduce_sum_v1 (from
tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future
version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From C:\Users\sozhan\Anaconda2\lib\site-
packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from
tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:Variable *= will be deprecated. Use `var.assign(var * other)` if you want
assignment to the variable value or `x = x * y` if you want a new python Tensor object.
Epoch 1/10
139/138 [==============================] - 51s 367ms/step - loss: 0.9863 - acc:
0.6043 - val_loss: 0.6350 - val_acc: 0.7591
Epoch 2/10
139/138 [==============================] - 47s 336ms/step - loss: 0.5411 - acc:
0.7947 - val_loss: 0.4170 - val_acc: 0.8441
Epoch 3/10
139/138 [==============================] - 48s 343ms/step - loss: 0.4594 - acc:
0.8278 - val_loss: 0.6648 - val_acc: 0.7307
Epoch 4/10
139/138 [==============================] - 46s 328ms/step - loss: 0.3686 - acc:
0.8642 - val_loss: 0.3508 - val_acc: 0.8709
Epoch 5/10
139/138 [==============================] - 46s 333ms/step - loss: 0.3162 - acc:
0.8855 - val_loss: 0.3843 - val_acc: 0.8661
Epoch 6/10
139/138 [==============================] - 48s 349ms/step - loss: 0.2712 - acc:
0.9039 - val_loss: 0.3046 - val_acc: 0.8929
Epoch 7/10
139/138 [==============================] - 46s 332ms/step - loss: 0.2601 - acc:
0.9005 - val_loss: 0.2986 - val_acc: 0.9039
Epoch 8/10
139/138 [==============================] - 46s 332ms/step - loss: 0.2168 - acc:
0.9214 - val_loss: 0.3035 - val_acc: 0.8945
Epoch 9/10
139/138 [==============================] - 44s 316ms/step - loss: 0.2203 - acc:
0.9213 - val_loss: 0.2454 - val_acc: 0.9134
Epoch 10/10
139/138 [==============================] - 54s 389ms/step - loss: 0.2047 - acc:
0.9308 - val_loss: 0.2947 - val_acc: 0.9008
B3. Saving the model
In [5]:
model.save('cnn_classification.h5')
B4. Loading the Model
In [6]:
model=models.load_model('cnn_classification.h5')
print(model)
<keras.models.Sequential object at 0x0000005BA45C1518>
B5. Saving weignts of model
In [7]:
model.save_weights('cnn_classification.h5')
B6. Loading the Model weights
In [8]:
model.load_weights('cnn_classification.h5')
C. Performance Measures: now we are going to plot the accuracy and loss.
In [9]:
train_acc=history.history['acc']
val_acc=history.history['val_acc']
train_loss=history.history['loss']
val_loss=history.history['val_loss']
print(train_acc)
print(val_acc)
print(train_loss)
print(val_loss)
[0.6062246278755075, 0.7947677041584165, 0.8272440234551195, 0.8637798827244023,
0.8858818223361341, 0.9039242219484008, 0.9012178620562986, 0.921515561596574,
0.92106450157871, 0.9305367613892648]
[0.7590551185795641, 0.8440944886583043, 0.7307086613234572, 0.8708661422016114,
0.8661417322834646, 0.8929133858267716, 0.9039370078740158, 0.8944881889763779,
0.9133858267716536, 0.9007874015748032]
[0.9849027604415818, 0.5411215644190319, 0.4603504691849327, 0.3689646118656171,
0.3152961805429231, 0.27086145290663827, 0.25899119408323146, 0.21610905267684696,
0.220770583645578, 0.20524998439873293]
[0.6350463369230586, 0.4170022392836143, 0.664773770016948, 0.3508223737318685,
0.38431625602048214, 0.30464723109905645, 0.29862876056920823, 0.3035450204385547,
0.2454165400482538, 0.29468511782997237]
In [10]:
epochs = range(len(train_acc))
plt.plot(epochs, train_acc, 'b', label='Training Accuracy')
plt.plot(epochs, val_acc, 'r', label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()
plt.figure()
plt.show()
<Figure size 432x288 with 0 Axes>
Model Testing
In [11]:
# Get the filenames from the generator
fnames=test_generator.filenames
# Get the ground truth from generator
ground_truth = test_generator.classes
# predictions for the test set (these lines also appear in the duplicate listing below)
predictions = model.predict_generator(test_generator, steps=test_generator.samples/test_generator.batch_size, verbose=1)
predicted_classes = np.argmax(predictions, axis=1)
errors = np.where(predicted_classes != ground_truth)[0]
print("No of errors = {}/{}".format(len(errors),test_generator.samples))
319/319 [==============================] - 4s 14ms/step
No of errors = 29/319
Assignment
You have to load the weights of the previous model and, with the help of the previous weights, try to create a CNN model with one more convolution layer. You have to train only the newly added convolution layers of the neural network.
(model.summary() output for the extended model, partially reproduced:)
conv2d_4 (Conv2D) (None, 62, 62, 128) 3584
max_pooling2d_4 (MaxPooling2D) (None, 31, 31, 128) 0
In [ ]:
# Here we are changing the learning rate from 0.001 to 0.01
Model Testing
In [21]:
# Get the filenames from the generator
fnames=test_generator.filenames
# Get the ground truth from generator
ground_truth=test_generator.classes
# Get the label to class mapping from the generator
label2index = test_generator.class_indices
# Getting the mapping from class index to class label
idx2label = dict((v, k) for k, v in label2index.items())
# Get the predictions from the model using the generator
predictions=model.predict_generator(test_generator,
steps=test_generator.samples/test_generator.batch_size,verbose=1)
predicted_classes=np.argmax(predictions,axis=1)
errors=np.where(predicted_classes!=ground_truth)[0]
print("No of errors = {}/{}".format(len(errors),test_generator.samples))
319/319 [==============================] - 4s 12ms/step
No of errors = 29/319
Result:
Thus, the Python program to build deep learning NN models was executed successfully.