Soft Computing Lab
RECORD NOTEBOOK
NAME:
REGISTER NUMBER:
ERODE – 638316
Ex.no: 1 Implementation of fuzzy inference system
Date:
AIM:
To implement a fuzzy inference system for decision-making based on fuzzy logic
principles.
PROCEDURE:
1. Define the universe of discourse for the input (temperature) and the output (fan speed).
2. Define triangular fuzzy membership functions (low, medium, high) for both variables.
3. Define the fuzzy rule base (e.g., IF temperature is medium THEN fan speed is medium).
4. Apply fuzzification and inference to process input through the rule base (a sketch follows this list).
5. Aggregate the rule outputs and defuzzify them to obtain a crisp fan speed.
6. Display and plot the results.
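To make step 4 concrete, here is a minimal sketch of how a single rule fires under Mamdani inference; the firing strength of 0.5 and the triangular 'medium' fan-speed set are assumed values for illustration, not part of the recorded program:

import numpy as np

# Assumed: the input temperature belongs to 'medium' with degree 0.5
firing_strength = 0.5
# Assumed triangular 'medium' fan-speed set over 0..100 %, peaking at 50 %
speed = np.arange(0, 101)
fan_medium = np.clip(1 - np.abs(speed - 50) / 25, 0, 1)
# Mamdani inference clips the consequent set at the rule's firing strength
fan_activation_medium = np.fmin(firing_strength, fan_medium)
print(fan_activation_medium.max())  # 0.5: the output set is capped at the strength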
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt

def trimf(x, a, b, c):
    # Triangular membership function rising from a, peaking at b, falling to c
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

# Universes of discourse (membership parameters assumed for this record)
temp = np.arange(0, 41)
speed = np.arange(0, 101)
temp_mfs = {'Low': trimf(temp, 0, 10, 20), 'Medium': trimf(temp, 10, 20, 30),
            'High': trimf(temp, 20, 30, 40)}
# Define fuzzy membership functions for fan speed
fan_mfs = {'Low': trimf(speed, 0, 25, 50), 'Medium': trimf(speed, 25, 50, 75),
           'High': trimf(speed, 50, 75, 100)}

plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
for name, mf in temp_mfs.items():
    plt.plot(temp, mf, label=name)
plt.xlabel('Temperature (°C)')
plt.ylabel('Membership Degree')
plt.legend()
plt.subplot(1, 2, 2)
for name, mf in fan_mfs.items():
    plt.plot(speed, mf, label=name)
plt.xlabel('Fan Speed (%)')
plt.ylabel('Membership Degree')
plt.legend()
plt.tight_layout()
plt.show()

# Fuzzify the crisp input and fire the rules: low->low, medium->medium, high->high
temp_input = 25
fan_activation_low = np.fmin(np.interp(temp_input, temp, temp_mfs['Low']), fan_mfs['Low'])
fan_activation_medium = np.fmin(np.interp(temp_input, temp, temp_mfs['Medium']), fan_mfs['Medium'])
fan_activation_high = np.fmin(np.interp(temp_input, temp, temp_mfs['High']), fan_mfs['High'])
aggregated_fan_speed = np.fmax(fan_activation_low,
                               np.fmax(fan_activation_medium, fan_activation_high))

# Display the results: centroid defuzzification gives the crisp fan speed
crisp_speed = np.sum(speed * aggregated_fan_speed) / np.sum(aggregated_fan_speed)
print(f"Recommended fan speed for {temp_input} °C: {crisp_speed:.1f}%")
OUTPUT:
RESULT:
The fuzzy inference system was successfully implemented, providing
accurate and logical outputs based on the defined fuzzy rules and input
conditions.
Ex.no: 2 Programming exercise on classification with a discrete perceptron
Date:
AIM:
To implement a discrete perceptron algorithm to classify data points into two classes
using supervised learning.
PROCEDURE:
1. Initialize parameters:
Set the weight vector and the bias to zero and choose a learning rate.
2. Prepare data:
Prepare a small set of labeled data points (e.g., 2D features with class labels -1
or +1).
3. Training process:
For each data point, compute output = sign(w·x + b); if the output differs from the
target, update the weights and bias with the perceptron learning rule (a worked
example follows this list).
4. Repeat steps:
Continue iterating through the dataset until all points are classified correctly (or
max iterations reached).
5. Test the perceptron:
Verify the trained perceptron's predictions on the data points.
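For illustration of the update in step 3 (all numbers assumed): with w = (0, 0), b = 0 and learning rate 0.1, suppose x = (2, 1) is predicted as class 1 but its target is class 0. The error is 0 - 1 = -1, so the rule w ← w + η·error·x, b ← b + η·error gives w = (-0.2, -0.1) and b = -0.1, moving the decision boundary away from x.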
PROGRAM:
import numpy as np

class Perceptron:
    def __init__(self, input_size):
        self.weights = np.zeros(input_size)
        self.bias = 0

    def predict(self, x):
        # Step activation: class 1 if w.x + b >= 0, else class 0
        return 1 if np.dot(self.weights, x) + self.bias >= 0 else 0

    def train(self, inputs, targets, learning_rate=0.1, epochs=50):
        for _ in range(epochs):
            for x, y in zip(inputs, targets):
                prediction = self.predict(x)
                error = y - prediction
                # Perceptron learning rule
                self.weights += learning_rate * error * x
                self.bias += learning_rate * error

def main():
    # Two linearly separable 2D classes (sample points assumed for illustration)
    class_0 = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 1.0]])
    class_1 = np.array([[4.0, 4.0], [4.0, 5.0], [5.0, 4.0]])
    # Combine and label data (0 for class_0, 1 for class_1)
    inputs = np.vstack((class_0, class_1))
    targets = np.array([0, 0, 0, 1, 1, 1])
    # Create perceptron
    perceptron = Perceptron(input_size=2)
    # Train perceptron
    perceptron.train(inputs, targets, learning_rate=0.1, epochs=50)
    # Test the trained perceptron
    for x, y in zip(inputs, targets):
        print(f"Input: {x}, Target: {y}, Predicted: {perceptron.predict(x)}")

if __name__ == "__main__":
    main()
OUTPUT:
RESULT:
Thus the discrete perceptron algorithm was implemented successfully and the data
points were classified into two classes.
Ex.no: 3 Implementation of XOR with backpropagation algorithm
Date:
AIM:
To implement the XOR logic gate using a neural network trained with
the Backpropagation algorithm.
ALGORITHM:
1. Initialize the weights and biases of a 2-2-1 network with small random values.
2. Forward pass: compute the hidden and output activations with the sigmoid function.
3. Compute the error between the target output and the predicted output.
4. Backpropagation: propagate the error backwards to get the gradient at each layer
(a worked example follows this list).
5. Update the weights and biases using the gradients scaled by the learning rate.
6. Repeat steps 2-5 for the chosen number of epochs.
7. Test the trained network on all four XOR input patterns.
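For illustration of steps 3-4 (values assumed): if a training pattern has target 1 and the network currently outputs 0.6, then error = 1 - 0.6 = 0.4 and the output delta is error · output · (1 - output) = 0.4 · 0.6 · 0.4 = 0.096, which is exactly the quantity the program computes as d_predicted_output.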
PROGRAM:
import numpy as np
# Sigmoid activation function and its derivative
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # Here x is already a sigmoid output s, so the derivative is s * (1 - s)
    return x * (1 - x)
# input and output for XOR logic gate
input_data = np.array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
target_output = np.array([[0],
[1],
[1],
[0]])
# Neural network architecture
input_size = 2
hidden_size = 2
output_size = 1
learning_rate = 0.5
epochs = 10000
# Initialize weights and biases
np.random.seed(42)
hidden_weights = np.random.uniform(size=(input_size, hidden_size))
hidden_bias = np.random.uniform(size=(1, hidden_size))
output_weights = np.random.uniform(size=(hidden_size, output_size))
output_bias = np.random.uniform(size=(1, output_size))
# Training the network
for _ in range(epochs):
    # Forward pass
    hidden_layer_input = np.dot(input_data, hidden_weights) + hidden_bias
    hidden_layer_output = sigmoid(hidden_layer_input)
    final_input = np.dot(hidden_layer_output, output_weights) + output_bias
    predicted_output = sigmoid(final_input)
    # Calculate error
    error = target_output - predicted_output
    # Backpropagation
    d_predicted_output = error * sigmoid_derivative(predicted_output)
    error_hidden_layer = d_predicted_output.dot(output_weights.T)
    d_hidden_layer = error_hidden_layer * sigmoid_derivative(hidden_layer_output)
    # Update weights and biases
    output_weights += hidden_layer_output.T.dot(d_predicted_output) * learning_rate
    output_bias += np.sum(d_predicted_output, axis=0, keepdims=True) * learning_rate
    hidden_weights += input_data.T.dot(d_hidden_layer) * learning_rate
    hidden_bias += np.sum(d_hidden_layer, axis=0, keepdims=True) * learning_rate

# Test the network
print("Final predictions after training:")
for i in range(len(input_data)):
    print(f"Input: {input_data[i]} => Predicted: {predicted_output[i][0]:.4f}")
OUTPUT:
RESULT:
The XOR function was successfully implemented using a neural
network with the Backpropagation algorithm.
Ex.no: 4 Implementation of self-organizing maps for a specific application
Date:
AIM:
To implement a Self-Organizing Map (SOM) for customer segmentation on the Mall
Customers dataset.
PREREQUISITES:
Python with the numpy, pandas, matplotlib, scikit-learn and MiniSom libraries
installed, and the Mall_Customers.csv dataset.
PROCEDURE:
1. Load the dataset and select the customer features.
2. Normalize the features with min-max scaling.
3. Initialize a SOM grid with random weights.
4. Train the SOM on the scaled data (a minimal sketch of the winner computation
follows this list).
5. Visualize the distance map (U-matrix) to locate cluster boundaries.
6. Mark each customer's winning neuron on the map.
7. Assign each customer the cluster id of its winning neuron.
8. Interpretation: Analyze clusters for insights.
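As a minimal sketch of the competitive step behind steps 3-4 (the toy grid and the sample below are assumed, not from the recorded code): the winning neuron is the grid cell whose weight vector is closest to the input in Euclidean distance.

import numpy as np

# Assumed toy SOM: a 2x2 grid of 3-dimensional weight vectors
weights = np.random.rand(2, 2, 3)
x = np.array([0.2, 0.7, 0.5])  # one scaled input sample (assumed)
# Winner = grid cell with the smallest Euclidean distance to x
dists = np.linalg.norm(weights - x, axis=2)
winner = np.unravel_index(np.argmin(dists), dists.shape)
print("winning neuron:", winner)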
CODE:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from minisom import MiniSom

df = pd.read_csv('Mall_Customers.csv')
# Feature columns (names assumed from the standard Mall Customers dataset)
X = df[['Age', 'Annual Income (k$)', 'Spending Score (1-100)']].values
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)

# 10x10 SOM grid (grid size and hyperparameters assumed)
som = MiniSom(x=10, y=10, input_len=X_scaled.shape[1], sigma=1.0, learning_rate=0.5)
som.random_weights_init(X_scaled)
som.train_random(data=X_scaled, num_iteration=100)

# Distance map (U-matrix): light cells mark boundaries between clusters
plt.bone()
plt.pcolor(som.distance_map().T)
plt.colorbar()
for x in X_scaled:
    w = som.winner(x)
    plt.plot(w[0] + 0.5, w[1] + 0.5, 'o', markerfacecolor='None',
             markeredgecolor='r', markersize=8)
plt.savefig('/content/som_result.png')  # save before show, or the figure is blank
plt.show()

# Give every customer the id of its winning neuron
cluster_ids = [som.winner(x)[0] * 10 + som.winner(x)[1] for x in X_scaled]
df['cluster'] = cluster_ids
cluster_counts = df['cluster'].value_counts().sort_index()
cluster_profile = df.groupby('cluster').agg({
    'Age': 'mean',
    'CustomerID': 'count'
}).rename(columns={'CustomerID': 'Count'})
print(cluster_counts)
print(cluster_profile)
OUTPUT:
RESULT:
The Self-Organizing Map was successfully implemented and customer segments
were identified.
Ex.no: 5 Programming exercises on maximizing a function using Genetic algorithm
Date:
AIM:
To maximize the function f(x) = x·sin(10πx) + 1 using a genetic algorithm.
PROCEDURE:
1. Representation: Encode each candidate solution x as a fixed-length binary string.
2. Fitness function: Define f(x) = x·sin(10πx) + 1 as the fitness to maximize.
3. Initialization: Generate a random population of bit strings.
4. Decoding: Map each bit string to a real value x in the search interval (a
decoding example follows this list).
5. Evaluation: Compute the fitness of every individual.
6. Selection: Pick parents with tournament selection.
7. Crossover: Apply single-point crossover with a fixed probability.
8. Mutation: Flip each bit with a small probability.
9. Replacement: Form the next generation and repeat for a fixed number of generations.
10. Output: Display best solution and plot fitness over generations.
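As an illustration of the decoding in step 4 (the [0, 1] interval is an assumption carried through the program below): a 16-bit chromosome is interpreted as an integer and scaled by 2^16 - 1 = 65535.

# Hypothetical chromosome, for illustration only
chrom = '1100000000000000'         # 49152 as an integer
x = int(chrom, 2) / (2 ** 16 - 1)  # 49152 / 65535 ≈ 0.75
print(f"{x:.5f}")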
PROGRAM:
import random
import math
import matplotlib.pyplot as plt

# Fitness function
def fitness(x):
    return x * math.sin(10 * math.pi * x) + 1

# Decode a bit string to a real value x in the assumed interval [0, 1]
def decode(chrom):
    return int(chrom, 2) / (2 ** len(chrom) - 1)

# Initialize population
def init_population(size, length):
    return [''.join(random.choice('01') for _ in range(length)) for _ in range(size)]

# Tournament selection
def select(pop, scores, k=3):
    selected = random.choices(list(zip(pop, scores)), k=k)
    return max(selected, key=lambda x: x[1])[0]

# Crossover
def crossover(p1, p2, rate=0.8):
    if random.random() < rate:
        point = random.randint(1, len(p1) - 1)
        return p1[:point] + p2[point:], p2[:point] + p1[point:]
    return p1, p2

# Mutation: flip each bit with a small probability
def mutate(chrom, rate=0.01):
    return ''.join(bit if random.random() > rate else '1' if bit == '0' else '0'
                   for bit in chrom)

# Genetic Algorithm
def genetic_algorithm(pop_size=50, length=16, gens=100, crossover_rate=0.8,
                      mutation_rate=0.01):
    population = init_population(pop_size, length)
    best_score = -1
    best_chrom = ""
    best_fitness_history = []
    average_fitness_history = []
    for generation in range(gens):
        scores = [fitness(decode(c)) for c in population]
        best_idx = scores.index(max(scores))
        if scores[best_idx] > best_score:
            best_score = scores[best_idx]
            best_chrom = population[best_idx]
        best_fitness_history.append(scores[best_idx])
        average_fitness_history.append(sum(scores) / len(scores))
        # Optional print
        print(f"Generation {generation+1}: Best Fitness = {scores[best_idx]:.5f}, "
              f"x = {decode(population[best_idx]):.5f}")
        # Create the next generation
        children = []
        while len(children) < pop_size:
            p1, p2 = select(population, scores), select(population, scores)
            c1, c2 = crossover(p1, p2, crossover_rate)
            children += [mutate(c1, mutation_rate), mutate(c2, mutation_rate)]
        population = children[:pop_size]
    return best_chrom, best_score, best_fitness_history, average_fitness_history

best_chrom, best_score, best_history, avg_history = genetic_algorithm()
print(f"Best solution: x = {decode(best_chrom):.5f}, fitness = {best_score:.5f}")
plt.figure(figsize=(10, 5))
plt.plot(best_history, label='Best Fitness')
plt.plot(avg_history, label='Average Fitness')
plt.title('Fitness Over Generations')
plt.xlabel('Generation')
plt.ylabel('Fitness')
plt.legend()
plt.grid(True)
plt.show()
OUTPUT:
> python genalgorithm.py
RESULT:
Thus the program for maximizing a function using a genetic algorithm was written
and implemented successfully.
Ex.no: 6 Implementation of two input sine function
Date:
AIM:
To implement a Python program that computes and displays the sine of two input
angles, the sum of their sines, and the sine of their sum.
PROCEDURE:
1. Import the math library.
2. Read the two input angles x and y in degrees.
3. Convert the angles from degrees to radians with math.radians().
4. Compute sin(x) and sin(y).
5. Compute sin(x) + sin(y) and sin(x + y) (a worked example follows this list).
6. Display the results:
Use print() statements to show the values of sin(x), sin(y),
sin(x) + sin(y) and sin(x + y) clearly.
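For instance, with x = 30° and y = 45°: sin 30° = 0.5000 and sin 45° ≈ 0.7071, so sin(x) + sin(y) ≈ 1.2071, while sin(x + y) = sin 75° ≈ 0.9659. This is the comparison the program prints: sin(x) + sin(y) is generally not equal to sin(x + y).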
PROGRAM:
import math

# Read the two input angles in degrees
x_deg = float(input("Enter angle x in degrees: "))
y_deg = float(input("Enter angle y in degrees: "))

# Compute the sines (math.sin expects radians)
sin_x = math.sin(math.radians(x_deg))
sin_y = math.sin(math.radians(y_deg))
sin_sum = sin_x + sin_y
sin_xy = math.sin(math.radians(x_deg + y_deg))

# Display results
print(f"sin({x_deg}) = {sin_x:.4f}")
print(f"sin({y_deg}) = {sin_y:.4f}")
print(f"sin({x_deg}) + sin({y_deg}) = {sin_sum:.4f}")
print(f"sin({x_deg} + {y_deg}) = sin({x_deg + y_deg}) = {sin_xy:.4f}")
OUTPUT:
RESULT:
Thus the program was successfully implemented using the math library.
Ex.no: 7 Implementation of three input non-linear function
Date:
AIM:
To implement a three input non-linear function using a sigmoid activation.
PROCEDURE:
Step 1: Define the mathematical structure of the non-linear function, for example
f(x, y, z) = a·x² + b·y³ + sin(z)
Step 2: Set up the environment. Ensure you have the necessary Python libraries to
handle the math and any additional functionality. For a non-linear function, you
would often use NumPy for efficient computation.
Step 3: Handle edge cases for inputs such as zero values, negative values, or
values that might lead to mathematical exceptions (like taking the sine of a large
number).
Step 4: Implement the function in Python, reading the three inputs from the user.
Step 5: Compute and display the output for the given inputs.
Step 6: Optionally visualize the function's behavior. To see how the function
behaves with different inputs, you can plot the output for a range of values
(a sketch of the Step 1 example follows this list).
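As a minimal sketch of the example structure in Step 1 (the coefficients a = 2 and b = 3 are assumed purely for illustration; the recorded program below instead uses a sigmoid of a weighted sum, in line with the AIM):

import numpy as np

def f(x, y, z, a=2.0, b=3.0):
    # a*x^2 + b*y^3 + sin(z): the example non-linear structure from Step 1
    return a * x ** 2 + b * y ** 3 + np.sin(z)

print(f(1.0, 1.0, 0.0))  # 2 + 3 + 0 = 5.0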
PROGRAM:
import numpy as np

# Sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Three-input non-linear function: sigmoid of a weighted sum
def nonlinear_function(x1, x2, x3, w1=0.5, w2=0.3, w3=0.2, bias=0.1):
    # Weighted sum (weight and bias values assumed for illustration)
    net_input = w1 * x1 + w2 * x2 + w3 * x3 + bias
    return sigmoid(net_input)

# Read the three inputs
x1 = float(input("Enter input x1: "))
x2 = float(input("Enter input x2: "))
x3 = float(input("Enter input x3: "))
# Compute output
result = nonlinear_function(x1, x2, x3)
# Display result
print(f"Nonlinear function output: {result:.4f}")
OUTPUT:
>python threeipfn.py
RESULT:
Thus the three input non-linear function using a sigmoid activation was
implemented successfully.