LAB MANUAL CST (Soft Computing) 12-02-2019
Semester-VIII
Institute Vision, Mission
Vision
To be a leader in engineering and technology education, a research centre of
global standards to provide valuable resources for industry and society through
development of competent technical human resources.
Mission
1) To undertake collaborative research projects that offer opportunities for consistent
interaction with industries.
2) To organize teaching-learning programs that facilitate the development of
competent and committed professionals for practice, research and academics.
3) To develop technocrats of national & international stature committed to the task
of nation building.
Department Vision & Mission
Vision
To be a centre of academic excellence and research in the field of Computer
Science and Technology by imparting knowledge to students and facilitating research
activities that cater to the needs of industry and society.
Mission
Index

Sr. No.   Contents
1.        List of Experiments
4.        Experiment No. 1
5.        Experiment No. 2
6.        Experiment No. 3
7.        Experiment No. 4
8.        Experiment No. 5
9.        Experiment No. 6
10.       Experiment No. 7
11.       Experiment No. 8
12.       Experiment No. 9
13.       Experiment No. 10
14.       Experiment No. 11
15.       Experiment No. 12
List of Experiments
1. Write a program to implement logical XOR.
2. Write a program to implement logical AND using the McCulloch–Pitts neuron model.
3. Write a program to implement logical XOR using the McCulloch–Pitts neuron model.
4. Write a program to implement logical AND using a Perceptron network.
5. Write a program to implement an Adaline network.
6. Write a program to implement a Madaline network for the XOR function.
7. Write a program to implement a back propagation network.
8. Write a program to implement the various primitive operations of classical sets.
9. Write a program to implement various primitive operations on fuzzy sets with dynamic components.
10. Write a program to maximize f(x1, x2) = 4x1 + 3x2 using a genetic algorithm.
11. Write a program to minimize f(x) = x^2 using a genetic algorithm.
12. Write a program to implement the Travelling Salesman Problem using a genetic algorithm.
CO1 Demonstrate different soft computing techniques such as genetic algorithms, fuzzy
logic, neural networks and their combinations.
CO2 Design and implement computing systems using appropriate artificial neural
networks and tools.
CO4 Apply the concepts of fuzzy logic, various fuzzy systems and their functions to
real-time systems.
CO5 Analyze genetic algorithms and their applications to solve engineering
optimization problems.
CO6 Apply soft computing techniques to solve engineering or real-life problems.
Experiment Plan
Study and Evaluation Scheme

Course Code   Course Name   Teaching Scheme   Credits Assigned

Course Code   Course Name   Examination Scheme
Experiment No. : 1
Title: Write a program to implement logical XOR.
Objectives: To understand the operation of logical XOR, its algebraic expression and
symbol, and how to implement it programmatically.
Outcomes: The students will be able to learn the concept of logical XOR, its algebraic
expression and detailed information about its symbol.
Input: 0, 1
Output: If both inputs are true or both are false, the output is false (0); if exactly one of the
two inputs is true, the output is true (1), as shown in the truth table.
Theory
The XOR gate (sometimes EOR or EXOR, pronounced Exclusive OR) is a digital logic gate
that gives a true (1 or HIGH) output when the number of true inputs is odd. An XOR gate
implements an exclusive or; that is, a true output results if one, and only one, of the inputs
to the gate is true. If both inputs are false (0/LOW) or both are true, a false output results.
XOR represents the inequality function, i.e., the output is true if the inputs are not alike;
otherwise the output is false. A way to remember XOR is "one or the other but not both".
INPUT          OUTPUT
A      B       Y
0      0       0
0      1       1
1      0       1
1      1       0
Table: Truth table
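Since XOR is the inequality function, the table above can be reproduced with a one-line check; a minimal sketch (the helper name xor is ours, not part of the manual's listing):

# XOR as the inequality function: true exactly when the inputs differ
def xor(a, b):
    return int(a != b)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor(a, b))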
Algorithm/Pseudo code:
# Program for implementation of XOR gate
import pandas as pd

test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
correct_outputs = [False, True, True, False]
outputs = []
# XOR is true exactly when the two inputs differ
for a, b in test_inputs:
    outputs.append(int(a != b))
output_frame = pd.DataFrame(
    [(a, b, out) for (a, b), out in zip(test_inputs, outputs)],
    columns=["Input A", "Input B", "Output"])
print(output_frame.to_string(index=False))
Output:
Input A Input B Output
0 0 0
0 1 1
1 0 1
1 1 0
Example:
A      B       Y
0      0       0
0      1       1
1      0       1
1      1       0
Conclusion: Thus we have studied the concept of logical XOR, its algebraic expression and
the working of its symbol in detail.
Experiment No. : 2
Title: Write a program to implement logical AND using the McCulloch–Pitts neuron model
Aim: To write a program to implement logical AND using the McCulloch–Pitts neuron model.
Objectives: To know the concept of logical AND using the McCulloch–Pitts neuron model with
an example.
Outcomes: The students will be able to learn the logical AND operation, its truth table and its
theory using the McCulloch–Pitts model.
Input: y_in = A + B
Output: s = f(y_in)
Theory:
Every neuron model consists of a processing element with synaptic input connections and a
single output. The "neurons" operate under the following assumptions:
i. They are binary devices (Vi ∈ {0, 1}).
ii. Each neuron has a fixed threshold, theta.
iii. The neuron receives inputs from excitatory synapses, all having identical weights.
iv. Inhibitory inputs have an absolute veto power over any excitatory inputs.
v. At each time step the neurons are simultaneously (synchronously) updated by summing the
weighted excitatory inputs and setting the output (Vi) to 1 if the sum is greater than or equal
to the threshold and the neuron receives no inhibitory input.
For the AND gate this means that with both weights equal to 1 and threshold 2, the weighted
sum x1 + x2 reaches the threshold only for the input (1, 1).
AND GATE:
It is a logic gate that implements conjunction. The output is high (1) only when both inputs
are high; otherwise it is low (0).
A      B       Y
0      0       0
0      1       0
1      0       0
1      1       1
Table: Truth table
IMPLEMENTATION OF MCCULLOCH PITTS MODEL:
Algorithm/Pseudo code:
def cal_output_and(threshold=2):
    # McCulloch-Pitts AND: both excitatory weights are 1, so the weighted
    # sum x1 + x2 reaches the threshold (2) only for the input (1, 1)
    weight1 = 1
    weight2 = 1
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        output = 1 if x1 * weight1 + x2 * weight2 >= threshold else 0
        print(x1, x2, output)
    return threshold

t = cal_output_and()
Output:
All outputs are correct for threshold 2.
Conclusion: Thus we have studied the logical AND operation with an example using the
McCulloch–Pitts model in detail.
Experiment No. : 3
Title: Write a program to implement logical XOR using the McCulloch–Pitts neuron model
Aim: To write a program to implement logical XOR using the McCulloch–Pitts neuron model.
Objectives: To understand the details of logical XOR using the McCulloch–Pitts neuron model
with an example.
Outcomes: The students will be able to learn the concept of the logical XOR operation, the
architecture of logical XOR and logical XOR using the McCulloch–Pitts model.
Input: 0, 1
Output: The network takes two inputs x1 and x2 (0 is false, 1 is true); when both inputs are
equal the output y is false (0), and when exactly one input is true the output y is true (1).
Theory:
McCULLOCH PITTS MODEL:
Every neuron model consists of a processing element with synaptic input connections and a
single output. The "neurons" operate under the following assumptions:
i. They are binary devices (Vi ∈ {0, 1}).
ii. Each neuron has a fixed threshold, theta.
iii. The neuron receives inputs from excitatory synapses, all having identical weights.
iv. Inhibitory inputs have an absolute veto power over any excitatory inputs.
v. At each time step the neurons are simultaneously (synchronously) updated by summing the
weighted excitatory inputs and setting the output (Vi) to 1 if the sum is greater than or equal
to the threshold and the neuron receives no inhibitory input.
XOR GATE
It is also called the exclusive-or gate. It gives a true output when the number of true inputs is
odd. If both inputs are true or both are false, the output is false. XOR gates are used to
implement binary addition in computers. [7] The truth table is shown below.
X1     X2      Y
0      0       0
0      1       1
1      0       1
1      1       0
IMPLEMENTATION OF MCCULLOCH PITTS MODEL:
Algorithm/Pseudo code:
import numpy as np

def step_function(ip, threshold=0):
    if ip >= threshold:
        return 1
    else:
        return 0

def cal_gate(x, w, b):
    # Weighted sum plus bias, passed through the step activation
    return step_function(np.dot(w, x) + b)

def AND_gate_ip(x):
    w = np.array([1, 1])
    b = -1.5
    return cal_gate(x, w, b)

def NOT_gate_ip(x):
    w = -1
    b = 0.5
    return cal_gate(x, w, b)

def OR_ip(x):
    w = np.array([1, 1])
    b = -0.5
    return cal_gate(x, w, b)

def Logical_XOR(x):
    # XOR(x1, x2) = AND(NOT(AND(x1, x2)), OR(x1, x2))
    A = AND_gate_ip(x)
    C = NOT_gate_ip(A)
    B = OR_ip(x)
    AND_output = np.array([C, B])
    output = AND_gate_ip(AND_output)
    return output

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
for i in inputs:
    print(Logical_XOR(i))
Output:
     Input 1   Input 2   Activation Output   Is Correct
0    0         0         0                   Yes
1    0         1         1                   Yes
2    1         0         1                   Yes
3    1         1         0                   Yes
Conclusion: Thus we have studied the concept of logical XOR using the McCulloch–Pitts
neuron model with an example.
Experiment No. : 4
Title: Write a program to implement logical AND using a Perceptron network
Outcomes: The students will be able to learn the concept of logical AND using a perceptron
network with definitions and examples.
Input: 0, 1
Output: 0, 1
Theory:
A perceptron is an algorithm for supervised learning of binary classifiers. It enables a neuron
to learn by processing the elements of the training set one at a time.
The perceptron algorithm learns the weights for the input signals in order to draw a linear
decision boundary. This enables it to distinguish between the two linearly separable classes
+1 and -1. A perceptron is a function that multiplies its input x by the learned weight
coefficients and generates an output value f(x).
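The listing below uses fixed, hand-picked weights. As a minimal sketch of the learning just described (our own illustration, not part of the manual's listing; the learning rate and targets are assumptions), the perceptron rule w <- w + alpha*(t - y)*x can find the AND weights from data:

import numpy as np

# Perceptron learning rule applied to the AND truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 0, 0, 1])            # AND targets
w = np.zeros(2); b = 0.0; alpha = 0.1
for epoch in range(20):
    for xi, ti in zip(X, t):
        y = 1 if np.dot(w, xi) + b >= 0 else 0
        # Update weights and bias only when the prediction is wrong
        w += alpha * (ti - y) * xi
        b += alpha * (ti - y)
print(w, b)   # a learned linear decision boundary for AND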
Algorithm/Pseudo code:
import numpy as np

def step_function(ip, threshold=0):
    if ip >= threshold:
        return 1
    else:
        return 0

def cal_gate(x, w, b):
    # Weighted sum plus bias, passed through the step activation
    return step_function(np.dot(w, x) + b)

def AND_gate_ip(x):
    w = np.array([1, 1])
    b = -1.5
    return cal_gate(x, w, b)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
for i in inputs:
    print(AND_gate_ip(i))
Output:
0
0
0
1
Example:
We shall see explicitly how one can construct simple networks that perform NOT, AND, and OR. It is then a
well-known result from logic that we can construct any logical function from these three operations. The
resulting networks, however, will usually have a much more complex architecture than a simple
perceptron. We generally want to avoid decomposing complex problems into simple logic gates and
instead find the weights and thresholds that work directly in a perceptron architecture.
Conclusion: Hence we have studied the implementation of logical AND using a perceptron network.
Experiment No. : 5
Title: Write a program to implement an Adaline network
Outcomes: The students will be able to learn the concept of the Adaline network.
Theory:
ADALINE (Adaptive Linear Neuron, later Adaptive Linear Element) is an early single-layer
artificial neural network and the name of the physical device that implemented this network. The
network uses memistors. It was developed by Professor Bernard Widrow and his graduate student
Ted Hoff at Stanford University in 1960. It is based on the McCulloch–Pitts neuron and consists
of weights, a bias and a summation function.
The difference between Adaline and the standard (McCulloch–Pitts) perceptron is that in the
learning phase, the weights are adjusted according to the weighted sum of the inputs (the net). In
the standard perceptron, the net is passed to the activation (transfer) function and the function's
output is used for adjusting the weights.
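A minimal sketch of that difference (our own illustration; the function names and the bipolar targets are assumptions):

import numpy as np

def adaline_update(w, x, t, alpha):
    # Delta rule: the error is taken on the raw net input
    y_in = np.dot(w, x)
    return w + alpha * (t - y_in) * x

def perceptron_update(w, x, t, alpha):
    # The error is taken on the thresholded activation output
    y = 1 if np.dot(w, x) >= 0 else -1
    return w + alpha * (t - y) * x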
Algorithm/Pseudo code:
import numpy as np
import matplotlib.pyplot as plt

LEARNING_RATE = 0.2  # kept small so the per-sample LMS updates converge

def step(x):
    if x > 0:
        return 1
    else:
        return -1

# Bipolar OR training data; the third column is the constant bias input
INPUTS = np.array([[-1, -1, 1],
                   [-1,  1, 1],
                   [ 1, -1, 1],
                   [ 1,  1, 1]])
OUTPUTS = np.array([[-1, 1, 1, 1]]).T
WEIGHTS = np.zeros((3, 1))
print("Weights {} before training".format(WEIGHTS.ravel()))

errors = []
for epoch in range(50):
    for input_item, target in zip(INPUTS, OUTPUTS):
        # Adaline: the error is measured on the raw weighted sum (the net),
        # not on the thresholded output
        ADALINE_OUTPUT = np.dot(input_item, WEIGHTS).item()
        ERROR = target.item() - ADALINE_OUTPUT
        errors.append(ERROR)
        WEIGHTS = WEIGHTS + LEARNING_RATE * ERROR * input_item.reshape(-1, 1)

print("Weights {} after training".format(WEIGHTS.ravel()))
for input_item in INPUTS:
    print(input_item[:2], "->", step(np.dot(input_item, WEIGHTS).item()))

ax = plt.subplot(111)
ax.plot(errors, label='Training Errors')
ax.set_xscale("log")
plt.title("ADALINE Errors")
plt.legend()
plt.xlabel('Iteration')
plt.ylabel('Error')
plt.show()
Experiment No. : 6
Title: Write a program to implement a Madaline network for the XOR function
Outcomes: The students will be able to learn the concept of the Madaline network.
Input: bipolar input pairs (x1, x2)
Output: y = f(y_in), from a network whose hidden layer consists of m Adaline units
5.3. Input the same training pattern again.
5.4. If the number of errors is reduced, accept the correction; otherwise, restore the original weights.
Algorithm/Pseudo code:
import numpy as np
import copy

def Activation_function(val):
    if val >= 0:
        return 1
    else:
        return -1

def Testing(mat_inputs_s, w11, w21, w12, w22, b1, b2, v1, v2, b3):
    print("------------------------")
    print("Testing for XOR GATE")
    print("------------------------")
    for i in range(len(mat_inputs_s)):
        mat_inputs_x_i = list(mat_inputs_s[i].flat)
        z_in_1 = b1 + mat_inputs_x_i[0]*w11 + mat_inputs_x_i[1]*w21
        z_in_2 = b2 + mat_inputs_x_i[0]*w12 + mat_inputs_x_i[1]*w22
        z1 = Activation_function(z_in_1)
        z2 = Activation_function(z_in_2)
        y_in = b3 + z1*v1 + z2*v2
        y = Activation_function(y_in)
        print("Input: " + str(mat_inputs_x_i) + " Output: " + str(y) +
              " (Value=" + str(y_in) + ")")

## XOR GATE => (2*AND NOT + OR)
def Madaline_DeltaRule(alpha):
    print("------------------------")
    print("MADALINE FOR XOR GATE with alpha = " + str(alpha))
    print("------------------------")
    mat_inputs_s = np.matrix([[1, 1], [1, -1], [-1, 1], [-1, -1]])
    mat_target_t = [-1, 1, 1, -1]
    # Output-layer weights are fixed; only the hidden Adalines are trained
    v1 = v2 = b3 = 0.5
    w11 = 0.05; w21 = 0.2
    w12 = 0.1;  w22 = 0.2
    b1 = 0.3;   b2 = 0.15
    iterations = 0
    while True:
        iterations += 1
        prev_w11 = copy.deepcopy(w11); prev_w21 = copy.deepcopy(w21)
        prev_w12 = copy.deepcopy(w12); prev_w22 = copy.deepcopy(w22)
        for i in range(len(mat_inputs_s)):
            mat_inputs_x_i = list(mat_inputs_s[i].flat)
            z_in_1 = b1 + mat_inputs_x_i[0]*w11 + mat_inputs_x_i[1]*w21
            z_in_2 = b2 + mat_inputs_x_i[0]*w12 + mat_inputs_x_i[1]*w22
            z1 = Activation_function(z_in_1)
            z2 = Activation_function(z_in_2)
            y_in = b3 + z1*v1 + z2*v2
            y = Activation_function(y_in)
            ## Error
            if mat_target_t[i] != y:
                if mat_target_t[i] == 1:
                    # Target +1: update the Adaline whose net input is closer
                    # to zero, pushing it towards +1 (standard MRI rule)
                    if z_in_1 > z_in_2:
                        b1 = b1 + alpha*(1 - z_in_1)
                        w11 = w11 + alpha*(1 - z_in_1)*mat_inputs_x_i[0]
                        w21 = w21 + alpha*(1 - z_in_1)*mat_inputs_x_i[1]
                    else:
                        b2 = b2 + alpha*(1 - z_in_2)
                        w12 = w12 + alpha*(1 - z_in_2)*mat_inputs_x_i[0]
                        w22 = w22 + alpha*(1 - z_in_2)*mat_inputs_x_i[1]
                else:
                    # Target -1: push every Adaline with a positive net
                    # input towards -1
                    if z_in_1 >= 0:
                        # Update Adaline z1
                        b1 = b1 + alpha*(-1 - z_in_1)
                        w11 = w11 + alpha*(-1 - z_in_1)*mat_inputs_x_i[0]
                        w21 = w21 + alpha*(-1 - z_in_1)*mat_inputs_x_i[1]
                    if z_in_2 >= 0:
                        # Update Adaline z2
                        b2 = b2 + alpha*(-1 - z_in_2)
                        w12 = w12 + alpha*(-1 - z_in_2)*mat_inputs_x_i[0]
                        w22 = w22 + alpha*(-1 - z_in_2)*mat_inputs_x_i[1]
        print("Weights till now:")
        print("Adaline Z1:")
        print("w11 = " + str(w11)); print("w21 = " + str(w21)); print("b1 = " + str(b1))
        print("Adaline Z2:")
        print("w12 = " + str(w12)); print("w22 = " + str(w22)); print("b2 = " + str(b2))
        print("------------------------")
        if (prev_w11, prev_w21, prev_w12, prev_w22) == (w11, w21, w12, w22):
            print("Stopping Condition satisfied. Weights stopped changing.")
            print("Total Iterations = " + str(iterations))
            break
    Testing(mat_inputs_s, w11, w21, w12, w22, b1, b2, v1, v2, b3)

## Alpha = 0.05
Madaline_DeltaRule(0.05)
## Alpha = 0.1
Madaline_DeltaRule(0.1)
## Alpha = 0.5
Madaline_DeltaRule(0.5)
Output:
------------------------
Weights till now:
------------------------
Adaline Z1:
w11 = 1.3203125000000002
w21 = -1.3390625
b1 = -1.0671875000000002
------------------------
Adaline Z2:
w12 = -1.2921875
w22 = 1.2859375
b2 = -1.0765624999999999
------------------------
------------------------
Stopping Condition satisfied. Weights stopped changing.
Total Iterations = 3
------------------------
Final Weights:
------------------------
Adaline Z1:
w11 = 1.3203125000000002
w21 = -1.3390625
b1 = -1.0671875000000002
------------------------
Adaline Z2:
w12 = -1.2921875
w22 = 1.2859375
b2 = -1.0765624999999999
------------------------
------------------------
Testing for XOR GATE
------------------------
Input: [1, 1] Output: -1 (Value=-0.5)
Input: [1, -1] Output: 1 (Value=0.5)
Input: [-1, 1] Output: 1 (Value=0.5)
Input: [-1, -1] Output: -1 (Value=-0.5)
Experiment No. : 7
Title: Write a program to implement a back propagation network
Objectives: To understand and implement the XOR operation using a back propagation
network.
Outcomes: The students will be able to learn the XOR operation using the technique of the
back propagation network.
Theory:
Back propagation:
The back propagation algorithm was originally introduced in the 1970s, but its importance wasn't
fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald
Williams.
The back propagation algorithm looks for the minimum of the error function in weight
space using a technique called the delta rule or gradient descent. The weights that minimize the
error function are then considered to be a solution to the learning problem.
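As a minimal sketch of gradient descent on a single weight (our own illustration, not the manual's listing; the training pair and learning rate are assumptions), for the error E(w) = (t - w*x)^2 the update w <- w - alpha*dE/dw moves w towards the minimum:

# Gradient descent on E(w) = (t - w*x)^2 for one weight
x, t = 2.0, 1.0          # one training pair (assumed values)
w, alpha = 0.0, 0.05
for step in range(100):
    grad = -2 * x * (t - w * x)   # dE/dw
    w -= alpha * grad
print(w)   # approaches t/x = 0.5, where the error is zero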
Algorithm/Pseudo code:
import numpy as np

# Sigmoid activation; with deriv=True, x is assumed to already be a sigmoid output
def nonlin(x, deriv=False):
    if deriv:
        return x*(1-x)
    return 1/(1+np.exp(-x))

X = np.array([[0,0,1],
              [0,1,1],
              [1,0,1],
              [1,1,1]])
y = np.array([[0],
              [1],
              [1],
              [0]])

np.random.seed(1)
# Randomly initialize the two weight layers with mean 0
syn0 = 2*np.random.random((3,4)) - 1
syn1 = 2*np.random.random((4,1)) - 1

for j in range(60000):
    # Forward pass through layers 0 (input), 1 (hidden) and 2 (output)
    k0 = X
    k1 = nonlin(np.dot(k0, syn0))
    k2 = nonlin(np.dot(k1, syn1))
    # Backward pass: propagate the error and compute the deltas
    k2_error = y - k2
    if (j % 10000) == 0:
        print("Error:" + str(np.mean(np.abs(k2_error))))
    k2_delta = k2_error*nonlin(k2, deriv=True)
    k1_error = k2_delta.dot(syn1.T)
    k1_delta = k1_error*nonlin(k1, deriv=True)
    # Weight updates
    syn1 += k1.T.dot(k2_delta)
    syn0 += k0.T.dot(k1_delta)
Output:
Error:0.4964100319027255
Error:0.008584525653247157
Error:0.0057894598625078085
Error:0.004629176776769985
Error:0.0039587652802736475
Error:0.003510122567861678
We then squash the weighted sum using the logistic function to get the neuron's output.
We repeat this process for the output layer neurons, using the outputs from the hidden layer
neurons as their inputs.
Experiment No. : 8
Title: Write a program to implement the various primitive operations of classical sets
Aim: To write a program to implement the various primitive operations of classical sets.
Objectives: To understand and perform various set operations such as union, intersection,
difference, symmetric difference and complement.
Outcomes: The students will be able to perform the various primitive operations of classical sets.
Output: The result of performing the various set operations on the given sets A and B.
Theory:
A set is defined as a collection of objects which share certain characteristics. A classical set is a
collection of distinct objects; for example, one may define the set of negative integers or a set of
students. Each individual entity in a set is called a member or an element of the set. A classical
set divides the universe into two groups: members and non-members.
Classical sets obey the following laws (a short Python check follows the list):
● Commutativity: A ∪ B = B ∪ A; A ∩ B = B ∩ A
● Associativity: A ∪ (B ∪ C) = (A ∪ B) ∪ C; A ∩ (B ∩ C) = (A ∩ B) ∩ C
● Distributivity: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C); A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
● Idempotency: A ∪ A = A; A ∩ A = A
● Identity: A ∪ ∅ = A; A ∩ X = A; A ∩ ∅ = ∅; A ∪ X = X
● Transitivity: if A ⊆ B and B ⊆ C, then A ⊆ C
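A quick sanity check of a few of these laws with Python's built-in set type (the sample sets are our own choice):

# Verify some classical set laws on sample sets
A = {0, 2, 4, 6, 8}
B = {1, 2, 3, 4, 5}
C = {4, 5, 6}
assert (A | B) == (B | A)                     # commutativity
assert (A | (B & C)) == ((A | B) & (A | C))   # distributivity
assert (A | A) == A and (A & A) == A          # idempotency
print("Set laws verified")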
Algorithm/Pseudo code:
print("Union :", A | B)
print("Difference :", A - B)
43
print("Symmetric difference :", A ^ B)
Output:
Union : {0, 1, 2, 3, 4, 5, 6, 8}
Intersection : {2, 4}
Difference : {0, 8, 6}
Symmetric difference : {0, 1, 3, 5, 6, 8}
Conclusion: Thus we have studied classical sets and their primitive operations.
Experiment No. : 9
Title: Write a program to implement various primitive operations on fuzzy sets with dynamic
components
Aim: To write a program to implement various primitive operations on fuzzy sets with dynamic
components.
Objectives: To understand and perform various fuzzy set operations such as union, intersection,
complement and difference.
Outcomes: The students will be able to perform various fuzzy set operations such as union,
intersection, complement and difference.
Output: The result of performing the various fuzzy set operations on the given sets A and B.
Theory:
A fuzzy set operation is an operation on fuzzy sets. These operations are generalizations of crisp
set operations, and there is more than one possible generalization. The most widely used
operations are called the standard fuzzy set operations: fuzzy complement, fuzzy intersection,
and fuzzy union.
A fuzzy set admits gradation, such as all tones between black and white. It has a graphical
description that expresses how the transition from membership to non-membership takes place;
this graphical description is called a membership function.
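The standard operations reduce to pointwise max, min and complement on the membership grades; a minimal sketch (representing fuzzy sets as dictionaries from elements to grades is our own choice):

# Standard fuzzy set operations on membership grades
a = {'x1': 0.5, 'x2': 0.7, 'x3': 0.0}
b = {'x1': 0.8, 'x2': 0.2, 'x3': 1.0}
union        = {k: max(a[k], b[k]) for k in a}      # grade = max
intersection = {k: min(a[k], b[k]) for k in a}      # grade = min
complement_a = {k: round(1 - a[k], 2) for k in a}   # grade = 1 - grade
print(union, intersection, complement_a, sep='\n')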
Algorithm/Pseudo code:
%Program for calculating union, intersection and complement for given fuzzy
%set
import numpy as np
class FuzzySet:
def __init__(self, iterable: any):
self.f_set = set(iterable)
self.f_list = list(iterable)
self.f_len = len(iterable)
for elem in self.f_set:
if not isinstance(elem, tuple):
raise TypeError("No tuples in the fuzzy set")
if not isinstance(elem[1], float):
raise ValueError("Probabilities not assigned to elements")
46
# fuzzy set union
if len(self.f_set) != len(other.f_set):
raise ValueError("Length of the sets is different")
f_set = [x for x in self.f_set]
other = [x for x in other.f_set]
return FuzzySet([f_set[i] if f_set[i][1] > other[i][1] else other[i]
for i in range(len(self))])
def __invert__(self):
f_set = [x for x in self.f_set]
for indx, elem in enumerate(f_set):
f_set[indx] = (elem[0], float(round(1 - elem[1], 2)))
return FuzzySet(f_set)
@staticmethod
def max_min(array1: np.ndarray, array2: np.ndarray):
tmp = np.zeros((array1.shape[0], array2.shape[1]))
t = list()
for i in range(len(array1)):
for j in range(len(array2[0])):
for k in range(len(array2)):
t.append(round(min(array1[i][k], array2[k][j]), 2))
tmp[i][j] = max(t)
47
t.clear()
return tmp
def __len__(self):
self.f_len = sum([1 for i in self.f_set])
return self.f_len
def __str__(self):
return f'{[x for x in self.f_set]}'
def __iter__(self):
for i in range(len(self)):
yield self[i]
r = np.array([[0.6, 0.6, 0.8, 0.9], [0.1, 0.2, 0.9, 0.8], [0.9, 0.3, 0.4,
0.8], [0.9, 0.8, 0.1, 0.2]])
s = np.array([[0.1, 0.2, 0.7, 0.9], [1.0, 1.0, 0.4, 0.6], [0.0, 0.0, 0.5,
0.9], [0.9, 1.0, 0.8, 0.2]])
print(f"Max Min: of \n{r} \nand \n{s}\n:\n\n")
print(FuzzySet.max_min(r, s))
Output:
a -> [('x3', 0.0), ('x1', 0.5), ('x2', 0.7)]
b -> [('x1', 0.8), ('x3', 1.0), ('x2', 0.2)]
Fuzzy union:
[('x1', 0.8), ('x3', 1.0), ('x2', 0.7)]
Fuzzy intersection:
[('x3', 0.0), ('x1', 0.5), ('x2', 0.2)]
Fuzzy inversion of b:
[('x3', 0.0), ('x1', 0.2), ('x2', 0.8)]
Fuzzy inversion of a:
[('x1', 0.5), ('x3', 1.0), ('x2', 0.3)]
Fuzzy Subtraction:
[('x3', 0.0), ('x1', 0.2), ('x2', 0.7)]
Max Min: of
[[0.6 0.6 0.8 0.9]
[0.1 0.2 0.9 0.8]
[0.9 0.3 0.4 0.8]
[0.9 0.8 0.1 0.2]]
and
[[0.1 0.2 0.7 0.9]
[1. 1. 0.4 0.6]
[0. 0. 0.5 0.9]
[0.9 1. 0.8 0.2]]
1. The membership function of the fuzzy set of real numbers "close to 1" can be defined as
A(t) = exp(-β(t - 1)^2), where β is a positive real number.
Conclusion: Thus we have studied the implementation of various primitive operations on fuzzy
sets with dynamic components.
Experiment No. : 10
Title: Write a program to maximize f(x1, x2) = 4x1 + 3x2 using a genetic algorithm
Theory:
A genetic algorithm is a search heuristic inspired by Charles Darwin's theory of natural
evolution. This algorithm reflects the process of natural selection, where the fittest individuals
are selected for reproduction in order to produce the offspring of the next generation. Five
phases are considered in a genetic algorithm:
1. Initial population
2. Fitness function
3. Selection
4. Crossover
5. Mutation
1. Initial Population
The process begins with a set of individuals which is called a Population. Each individual is a
candidate solution to the problem you want to solve.
In a genetic algorithm, the set of genes of an individual is represented using a string over some
alphabet. Usually, binary values are used (a string of 1s and 0s). We say that we encode the
genes in a chromosome.
2. Fitness Function
3. Selection
The idea of the selection phase is to select the fittest individuals and let them pass their genes to
the next generation. Pairs of individuals (parents) are selected based on their fitness scores;
individuals with high fitness have more chance to be selected for reproduction.
4. Crossover
Crossover is the most significant phase in a genetic algorithm. For each pair of parents to be
mated, a crossover point is chosen at random from within the genes.
5. Mutation
In certain new offspring, some of the genes can be subjected to mutation with a low random
probability. This implies that some of the bits in the bit string can be flipped, as the short
sketch below illustrates.
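A minimal sketch of phases 4 and 5 on bit strings (our own illustration; the main listing below works on real-valued genes instead):

import random

def one_point_crossover(p1, p2):
    # Swap the tails of the two parents at a random crossover point
    pt = random.randint(1, len(p1) - 1)
    return p1[:pt] + p2[pt:], p2[:pt] + p1[pt:]

def bit_flip_mutation(bits, rate=0.1):
    # Flip each bit independently with a small probability
    return [b ^ 1 if random.random() < rate else b for b in bits]

c1, c2 = one_point_crossover([1, 1, 1, 1], [0, 0, 0, 0])
print(c1, c2, bit_flip_mutation(c1))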
Algorithm/Pseudo code:
import numpy

def cal_pop_fitness(equation_inputs, pop):
    # Fitness of each chromosome: the value of y = w1*x1 + w2*x2
    return numpy.sum(pop * equation_inputs, axis=1)

def select_mating_pool(pop, fitness, num_parents):
    # Selecting the best individuals in the current generation as parents for
    # producing the offspring of the next generation.
    parents = numpy.empty((num_parents, pop.shape[1]))
    for parent_num in range(num_parents):
        max_fitness_idx = numpy.where(fitness == numpy.max(fitness))
        max_fitness_idx = max_fitness_idx[0][0]
        parents[parent_num, :] = pop[max_fitness_idx, :]
        fitness[max_fitness_idx] = -99999999999
    return parents

def crossover(parents, offspring_size):
    offspring = numpy.empty(offspring_size)
    # The crossover point: here, the midpoint between the genes
    crossover_point = numpy.uint8(offspring_size[1] / 2)
    for k in range(offspring_size[0]):
        # Index of the first parent to mate.
        parent1_idx = k % parents.shape[0]
        # Index of the second parent to mate.
        parent2_idx = (k + 1) % parents.shape[0]
        # The new offspring takes the first half of its genes from the first parent.
        offspring[k, 0:crossover_point] = parents[parent1_idx, 0:crossover_point]
        # The new offspring takes the second half of its genes from the second parent.
        offspring[k, crossover_point:] = parents[parent2_idx, crossover_point:]
    return offspring

def mutation(offspring_crossover):
    # Mutation changes a single gene in each offspring randomly.
    for idx in range(offspring_crossover.shape[0]):
        # The random value to be added to the last gene.
        random_value = numpy.random.uniform(-1.0, 1.0, 1)
        offspring_crossover[idx, -1] = offspring_crossover[idx, -1] + random_value
    return offspring_crossover

# Equation coefficients: y = 4*x1 - 2*x2 (as in the original listing)
equation_inputs = [4, -2]
num_weights = 2
sol_per_pop = 4
num_parents_mating = 4
num_generations = 5
pop_size = (sol_per_pop, num_weights)
new_population = numpy.random.uniform(low=-4.0, high=4.0, size=pop_size)
print(new_population)

for generation in range(num_generations):
    print("Generation : ", generation)
    # Measuring the fitness of each chromosome in the population.
    fitness = cal_pop_fitness(equation_inputs, new_population)
    print("Best result : ", numpy.max(fitness))
    parents = select_mating_pool(new_population, fitness, num_parents_mating)
    offspring_crossover = crossover(parents,
        offspring_size=(pop_size[0] - parents.shape[0], num_weights))
    offspring_mutation = mutation(offspring_crossover)
    new_population[0:parents.shape[0], :] = parents
    new_population[parents.shape[0]:, :] = offspring_mutation

fitness = cal_pop_fitness(equation_inputs, new_population)
best_match_idx = numpy.where(fitness == numpy.max(fitness))
print("Best solution : ", new_population[best_match_idx, :])
print("Best solution fitness : ", fitness[best_match_idx])
Output:
[[-2.19432529 1.70391184]
[ 0.47773586 -3.89955216]]
Generation : 0
Best result : 9.710047743186704
Generation : 1
Best result : 9.710047743186704
Generation : 2
Best result : 9.710047743186704
Generation : 3
Best result : 9.710047743186704
Generation : 4
Best result : 9.710047743186704
Best solution : [[[ 0.47773586 -3.89955216]]]
Best solution fitness : [9.71004774]
Conclusion: Thus we have studied the program to maximize f(x1, x2) = 4x1 + 3x2 using a
genetic algorithm.
Experiment No. : 11
Title: Write a program to minimize f(x) = x^2 using a genetic algorithm
Theory:
A genetic algorithm is a search heuristic inspired by Charles Darwin's theory of natural
evolution. This algorithm reflects the process of natural selection, where the fittest individuals
are selected for reproduction in order to produce the offspring of the next generation.
1. Initial Population
The process begins with a set of individuals which is called a Population. Each individual is a
candidate solution to the problem you want to solve. An individual is characterized by a set of
parameters (variables) known as genes. Genes are joined into a string to form a chromosome
(solution). In a genetic algorithm, the set of genes of an individual is represented using a string
over some alphabet. Usually, binary values are used (a string of 1s and 0s). We say that we
encode the genes in a chromosome.
2. Fitness Function
2. Fitness Function
3. Selection
The idea of the selection phase is to select the fittest individuals and let them pass their genes to
the next generation. Pairs of individuals (parents) are selected based on their fitness scores;
individuals with high fitness have more chance to be selected for reproduction.
4. Crossover
Crossover is the most significant phase in a genetic algorithm. For each pair of parents to be
mated, a crossover point is chosen at random from within the genes.
5. Mutation
In certain new offspring, some of the genes can be subjected to mutation with a low random
probability. This implies that some of the bits in the bit string can be flipped.
Algorithm/Pseudo code:
import random
import math

def objective(m):  # Objective function: f(x) = x^2 (the first gene holds x)
    return abs(m[0]*m[0])

def fitness(m):  # Fitness function: a smaller objective gives a larger fitness
    return 1/(1+m)

def probability(m, n):  # Selection probability
    return m/n

def crossover(m, n):  # One-point crossover
    pt = random.randint(0, 3)
    return m[:pt+1] + n[pt+1:]

print("Minimize x^2")
pop1 = []
# Initialize a random population: 6 chromosomes of 4 genes each
for i in range(6):
    pop1.append([])
    for j in range(4):
        pop1[i].append(random.randint(0, 30))
print("Initial Population: ")
for i in range(6):
    print(pop1[i])

# Maximum iterations 10
for it in range(10):
    obj = [0]*6
    print("Objective Function: ")
    for i in range(6):
        obj[i] = objective(pop1[i])
        print(obj[i])
    fit = [0]*6
    print("Fitness Value: ")
    for i in range(6):
        fit[i] = fitness(obj[i])
        print(fit[i])
    print("Total fitness: ", sum(fit))
    prob = [0]*6
    # Probability calculation
    for i in range(6):
        prob[i] = probability(fit[i], sum(fit))
    # Cumulative probability calculation
    cmp = [0]*6
    sum1 = 0
    for i in range(6):
        sum1 += prob[i]
        cmp[i] = sum1
    # Roulette wheel selection
    R = [0]*6
    for i in range(6):
        R[i] = random.random()
    new_pop = []
    for i in range(6):
        for j in range(6):
            if R[i] <= cmp[j]:
                new_pop.append(list(pop1[j]))  # copy, so chromosomes stay independent
                break
    print("After Selection: ", new_pop)
    # Crossover: choose parents with crossover rate cr
    cr = 0.25  # crossover rate
    CR = [random.random() for i in range(6)]
    par = []
    par_index = []
    for i in range(6):
        if CR[i] < cr:
            par.append(new_pop[i])
            par_index.append(i)
    x = len(par)
    for i in range(x):
        a = random.randint(0, x-1)
        b = random.randint(0, x-1)
        new_pop[par_index[i]] = crossover(par[a], par[b])
    # Mutation
    mr = 0.1  # mutation rate
    total_mut = math.floor(24*mr)  # 24 genes in total (6 chromosomes x 4 genes)
    mut_index = []
    mut_value = []
    for i in range(total_mut):
        mut_index.append(random.randint(0, 23))
        mut_value.append(random.randint(0, 30))
        cr_num = mut_index[i]//4   # chromosome number
        gen_num = mut_index[i]%4   # gene number within the chromosome
        new_pop[cr_num][gen_num] = mut_value[i]
    pop1 = new_pop
    print("After iteration: ", it)
    for i in range(6):
        print(pop1[i])
Output:
After Selection: [[1, 7, 29, 27], [1, 7, 29, 27], [1, 7, 29, 27], [1, 7,
29, 27], [1, 7, 29, 27], [1, 7, 29, 27]]
After iteration: 8
[[2, 7, 9, 27], [2, 7, 9, 27], [2, 7, 9, 27], [1, 7, 29, 27], [2, 7, 9,
27], [2, 7, 9, 27]]
[196, 1, 196, 1, 1, 1]
Objective Function:
4
4
4
1
4
4
Fitness Value:
0.2
0.2
0.2
0.5
0.2
0.2
Total fitness: 1.5
After Selection: [[1, 7, 29, 27], [2, 7, 9, 27], [1, 7, 29, 27], [1, 7,
29, 27], [2, 7, 9, 27], [2, 7, 9, 27]]
After iteration: 9
[[1, 7, 29, 27], [17, 7, 9, 27], [1, 20, 29, 27], [1, 20, 29, 27], [17, 7,
9, 27], [17, 7, 9, 27]]
[4, 4, 4, 1, 4, 4]
Conclusion: Thus we have studied the genetic operations, which include selection, crossover
and mutation.
Experiment No. : 12
Title: Write a program to implement Travelling Salesman Problem using genetic algorithm
Aim: Write a program to implement Travelling Salesman Problem using genetic algorithm.
Objectives: To understand the concept of a genetic algorithm using the Travelling Salesman Problem.
Outcomes: The students will be able to implement Travelling Salesman Problem using genetic
algorithm.
Theory:
The travelling salesman problem is classically solved with approaches such as the branch and
bound algorithm. The problem is NP-complete. It is also popularly known as the Travelling
Salesperson Problem. The TSP states: "From a given set of N cities and the distance between
each pair of cities, find the minimum path length in such a way that it covers each and every
city exactly once (without repetition of any path) and terminate the traversal at the starting
point or the starting city from where the traversal of the TSP algorithm was initiated."
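For a handful of cities the definition can be checked directly by brute force; a minimal sketch (the city coordinates are our own example data):

import itertools, math

cities = [(0, 0), (1, 0), (1, 1), (0, 2)]

def tour_length(order):
    # Closed tour: return to the starting city at the end
    return sum(math.dist(order[i], order[(i + 1) % len(order)])
               for i in range(len(order)))

best = min(itertools.permutations(cities), key=tour_length)
print(best, tour_length(best))

Brute force is O(N!), which is why the genetic algorithm below is used for larger instances.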
Algorithm/Pseudo code:
import random
import math
from collections import defaultdict
import matplotlib.pyplot as plt

def create_point():
    # (assumed) a random city in the unit square
    return (random.random(), random.random())

def distance(p1, p2):
    return math.dist(p1, p2)

def compute_distance_matrix(point_set):
    distance_matrix = defaultdict(dict)
    for orig in point_set:
        for dest in point_set:
            distance_matrix[orig][dest] = distance_matrix[dest][orig] = distance(orig, dest)
    return distance_matrix

def compute_solution_distance(individual, distance_matrix):
    # Length of the closed tour encoded by the individual
    total_distance = 0
    for i in range(len(individual)):
        total_distance += distance_matrix[individual[i]][individual[(i + 1) % len(individual)]]
    return total_distance

def create_individual(point_set):
    points = list(point_set)
    random.shuffle(points)
    return points

def create_population(population_size, point_set, distance_matrix):
    # Population entries are (individual, tour length), kept sorted by length
    population = [create_individual(point_set) for _ in range(population_size)]
    return sorted(((ind, compute_solution_distance(ind, distance_matrix))
                   for ind in population), key=lambda pair: pair[1])

def plot_point_set(point_set):
    point_list = list(point_set)
    xs = [point[0] for point in point_list]
    ys = [point[1] for point in point_list]
    plt.scatter(xs, ys)
    plt.axis('off')
    plt.show()

def plot_result(solution):
    # Draw the best tour as a closed polyline
    tour = solution[0] + [solution[0][0]]
    plt.plot([p[0] for p in tour], [p[1] for p in tour], marker='o')
    plt.axis('off')
    plt.show()

def mutation_reverse(individual):
    # Reverse a randomly chosen sub-route of the tour
    reverse_start = random.randint(0, len(individual) - 2)
    reverse_end = random.randint(reverse_start + 1, len(individual) - 1)
    new_individual = individual[:reverse_start] + \
                     individual[reverse_start:reverse_end][::-1] + \
                     individual[reverse_end:]
    return new_individual

def mutate(individual, distance_matrix):
    new_individual = mutation_reverse(individual)
    return (new_individual, compute_solution_distance(new_individual, distance_matrix))

def reproduce(individual_1, individual_2, distance_matrix):
    # Order crossover: first half of parent 1, remaining cities in parent 2's order
    ind_size = len(individual_1)
    ind_1_subset = individual_1[:ind_size // 2]
    ind_2_subset = [p for p in individual_2 if p not in ind_1_subset]
    child = ind_1_subset + ind_2_subset
    return (child, compute_solution_distance(child, distance_matrix))

def evolve(population, n_reproductions, n_mutations, n_news,
           reproductor_pool, distance_matrix, point_set):
    population_size = len(population)
    n_new_individuals = n_reproductions + n_mutations + n_news
    n_survivors = population_size - n_new_individuals
    reproductor_pool_size = round(reproductor_pool * population_size)
    new_population = population[:n_survivors]
    for _ in range(n_reproductions):
        individual_1 = population[random.randint(0, reproductor_pool_size - 1)][0]
        individual_2 = random.choice(population)[0]
        new_population.append(reproduce(individual_1, individual_2, distance_matrix))
    for _ in range(n_mutations):
        individual_to_mutate = random.choice(population)[0]
        new_population.append(mutate(individual_to_mutate, distance_matrix))
    for _ in range(n_news):
        new_individual = create_individual(point_set)
        new_population.append((new_individual,
            compute_solution_distance(new_individual, distance_matrix)))
    return sorted(new_population, key=lambda pair: pair[1])

point_set = {create_point() for _ in range(20)}
population_size, n_generations = 30, 200
n_reproductions, n_mutations, n_news, reproduction_pool = 10, 5, 5, 0.3
distance_matrix = compute_distance_matrix(point_set)
population = create_population(population_size, point_set, distance_matrix)
for i in range(n_generations):
    population = evolve(population, n_reproductions, n_mutations,
                        n_news, reproduction_pool, distance_matrix, point_set)
best = population[0]
print("Best tour length:", best[1])
plot_result(best)
Output:
Example:
So, basically, you have to find the shortest route that traverses all the cities without repeating
any city and finally ends the journey where it started.
Conclusion: Thus we have studied the implementation of the Travelling Salesman Problem
using a genetic algorithm.