
EX NO-01

DATE FACTORIAL

AIM:
To implement a Python program to find the factorial of a number using three
methods:
Looping method
Recursion method
Stirling's approximation

ALGORITHM:

# Method 1: Looping

 Get the user input n and initialize fact = 1.
 Iterate i from 1 to n using a for loop.
 In each iteration, update the factorial using
fact = fact * i
 Print the final result (a code sketch of this method follows).
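
The program listing is not reproduced for this experiment, so the following is only a minimal sketch of the looping method (variable and prompt names are assumed):

def factorial_loop(n):
    # Multiply fact by every integer i from 1 to n
    fact = 1
    for i in range(1, n + 1):
        fact = fact * i
    return fact

n = int(input("Enter a number: "))
print("Factorial (loop):", factorial_loop(n))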

# Method 2: Recursion

 Define a recursive function called factorial that takes an integer n as its argument.
 Set up a base case: if n is 1 or 0, return 1, since the factorial of 0 or 1 is 1.
 In the recursive case, call the factorial function with n-1 as the argument and
multiply the result by n.
 Return the result obtained from the recursive call (see the sketch below).
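
A minimal sketch of the recursive method described above (the function name matches the algorithm; the driver code is assumed):

def factorial(n):
    # Base case: the factorial of 0 or 1 is 1
    if n == 0 or n == 1:
        return 1
    # Recursive case: n! = n * (n-1)!
    return n * factorial(n - 1)

print("Factorial (recursion):", factorial(5))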

# Method 3: Stirling's approximation

 Import the math package.
 Define a function approximation that takes an integer N as its argument.
 Set up a base case: if N is 1 or 0, return 1, since the factorial of 0 or 1 is 1.
 Otherwise calculate the factorial using the formula
N! ≈ sqrt(2*pi*N) * (N**N) * exp(-N) * exp(lambda)
where 1/(12*N + 1) < lambda < 1/(12*N), and
lambda = lower bound + (upper bound - lower bound) / 2
 Print the final result.
 Relative error = |approximate value - exact value| / exact value
(a code sketch of this method follows the list).
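
A minimal sketch of the Stirling method and the relative-error check (helper names and the input prompt are assumed, not taken from an original listing):

import math

def approximation(N):
    # Base case: the factorial of 0 or 1 is 1
    if N == 0 or N == 1:
        return 1
    # lambda lies between 1/(12N + 1) and 1/(12N); take the midpoint as the estimate
    lower = 1 / (12 * N + 1)
    upper = 1 / (12 * N)
    lam = lower + (upper - lower) / 2
    return math.sqrt(2 * math.pi * N) * (N ** N) * math.exp(-N) * math.exp(lam)

N = int(input("Enter a number: "))
exact = math.factorial(N)
approx = approximation(N)
print("Stirling approximation:", approx)
print("Relative error:", abs(approx - exact) / exact)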
RESULT:
The Python program to find the factorial of a given number using the above three
approaches has been executed successfully.
EX NO 02
DATE FIBONACCI

AIM:
To implement a Python program to generate the Fibonacci sequence using recursion
and iteration, and to verify the golden ratio using the Fibonacci sequence.

ALGORITHM:

Iterative Approach
 Initialize variables a and b to 1.
 Run a for loop over the range [1, n).
 Compute the next number in the series: total = a + b.
 Shift the window: store b in a and store total in b.
Recursive Approach
 If n equals 1 or 0, return 1.
 Else return fib(n-1) + fib(n-2).
Golden ratio (golden number ≈ 1.618033)
 Divide consecutive Fibonacci numbers, F(n+1) / F(n), to estimate the golden ratio.
 The result of this calculation will be equal or close to the golden ratio
(a code sketch follows this algorithm).
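
Since the program listing for this experiment is not included here, the following is a minimal sketch of the two approaches and the golden-ratio check (function names and the value of n are assumed):

def fib_iterative(n):
    # Returns the n-th Fibonacci number, counting F(1) = F(2) = 1
    a, b = 1, 1
    for _ in range(1, n):
        a, b = b, a + b
    return a

def fib_recursive(n):
    # Base case from the algorithm: return 1 when n is 0 or 1,
    # so fib_recursive(n - 1) equals fib_iterative(n)
    if n == 0 or n == 1:
        return 1
    return fib_recursive(n - 1) + fib_recursive(n - 2)

n = 10
print("Iterative F(10):", fib_iterative(n))        # 55
print("Recursive fib(9):", fib_recursive(n - 1))   # 55
# The ratio of consecutive Fibonacci numbers approaches the golden ratio (about 1.618033)
print("F(n+1)/F(n) =", fib_iterative(n + 1) / fib_iterative(n))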

RESULT:

The Python program for the Fibonacci sequence and the golden ratio derived from
the Fibonacci sequence has been implemented.
EX NO 02

DATE QUADRATIC EQUATION
AIM:
To implement a Python program to find the roots of a given quadratic equation.

ALGORITHM:

Start
Read the values of a, b, c
Compute d = b^2 - 4ac
If d > 0:
    root1 = (-b + sqrt(b^2 - 4ac)) / 2a
    root2 = (-b - sqrt(b^2 - 4ac)) / 2a
else if d = 0:
    root1 = root2 = -b / 2a
else:
    root1 = (-b + i*sqrt(-(b^2 - 4ac))) / 2a
    root2 = (-b - i*sqrt(-(b^2 - 4ac))) / 2a
Print root1 and root2
End
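
The program listing is not included for this experiment; a minimal sketch following the algorithm above (prompts and variable names are assumed) is:

import math

a = float(input("Enter a: "))
b = float(input("Enter b: "))
c = float(input("Enter c: "))

d = b ** 2 - 4 * a * c  # discriminant
if d > 0:
    root1 = (-b + math.sqrt(d)) / (2 * a)
    root2 = (-b - math.sqrt(d)) / (2 * a)
elif d == 0:
    root1 = root2 = -b / (2 * a)
else:
    # Complex roots: real part -b/(2a), imaginary part +/- sqrt(-d)/(2a)
    root1 = complex(-b / (2 * a), math.sqrt(-d) / (2 * a))
    root2 = complex(-b / (2 * a), -math.sqrt(-d) / (2 * a))
print("root1 =", root1)
print("root2 =", root2)

For x^2 - x - 1 = 0 (a = 1, b = -1, c = -1) this gives root1 ≈ 1.618, the golden ratio.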

RESULT:

The Python program to find quadratic roots has been implemented, and it was verified that the
equation x^2 - x - 1 = 0 gives the golden ratio as its positive root.
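
The next listing begins inside the astar method of a Graph class; the class definition and the opening of the method are not part of this excerpt. A minimal sketch of the assumed preamble, with names chosen to match the excerpt, is shown below, and the excerpt then continues with the main search loop:

import heapq
from collections import defaultdict

class Graph:
    def __init__(self):
        self.edges = defaultdict(list)   # node -> list of (cost, neighbor) pairs
        self.h = {}                      # node -> heuristic estimate of distance to goal

    def add_edge(self, u, v, cost):
        # Undirected edge: store it in both directions
        self.edges[u].append((cost, v))
        self.edges[v].append((cost, u))

    def set_heuristic(self, node, value):
        self.h[node] = value

    def astar(self, start, goal):
        # Priority queue of (f, g, node, path) tuples, where f = g + h
        pq = [(self.h[start], 0, start, [])]
        visited = set()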
        while pq:
            (f, g, current, path) = heapq.heappop(pq)

            # Check if the node has been visited
            if current in visited:
                continue

            # Append current node to path
            path = path + [current]

            # Goal check
            if current == goal:
                return path, g

            visited.add(current)

            # Expand neighbors
            for cost, neighbor in self.edges[current]:
                if neighbor not in visited:
                    new_g = g + cost
                    new_f = new_g + self.h[neighbor]
                    heapq.heappush(pq, (new_f, new_g, neighbor, path))

        return None, float('inf')  # If no path is found

#Sample input:

# Initialize the graph


graph = Graph()
# Add edges (undirected)
graph.add_edge('A', 'B', 2)
graph.add_edge('A', 'E', 3)
graph.add_edge('B', 'C', 1)
graph.add_edge('B', 'G', 9)
graph.add_edge('E', 'D', 6)
graph.add_edge('D', 'G', 1)

# Set heuristic values (h function)


graph.set_heuristic('A', 11)
graph.set_heuristic('B', 6)
graph.set_heuristic('C', 99)
graph.set_heuristic('D', 1)
graph.set_heuristic('E', 7)
graph.set_heuristic('G', 0)
# Run A* search
path, cost = graph.astar('A', 'G')
print(f"Path: {path}, Cost: {cost}")

RESULT:
The A* search algorithm successfully found the shortest path from node 'A' to node
'G' in the given graph by intelligently combining actual path costs with heuristic
estimates. This example demonstrates the effectiveness of A* in navigating complex
graphs efficiently.
EX NO-03

DATE EXPLORE THE BASIC PROBABILITY FUNCTIONS

AIM:
The aim of this program is to generate random samples from different statistical
distributions (Normal, Bernoulli, Discrete Uniform, and Continuous Uniform) and
visualize their properties by plotting histograms of the generated samples.

ALGORITHM:
1. Normal Distribution

Input Parameters:

o Mean (μ): The average of the distribution.
o Standard deviation (σ): The measure of the spread of the distribution.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the normal distribution formula to generate random samples with
the specified mean and standard deviation.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Value), and y-axis (Density).
o Display the plot.
2. Bernoulli Distribution

Input Parameters:

o Probability of success (p): The probability of getting a "1" in each trial.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the Bernoulli distribution to generate random samples, where each
sample is either 0 or 1.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Outcome), and y-axis (Density).
o Display the plot with ticks on the x-axis for the outcomes (0 and 1).
3. Discrete Uniform Distribution

Input Parameters:

o Lower bound (low): The minimum integer value.
o Upper bound (high): The maximum integer value.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the discrete uniform distribution to generate random samples
from the specified integer range.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Value), and y-axis (Density).
o Display the plot with ticks for each possible value in the range.
4. Continuous Uniform Distribution

Input Parameters:

o Lower bound (low): The minimum value.
o Upper bound (high): The maximum value.
o Sample size: Number of random samples to generate.
Generate Samples:
o Use the continuous uniform distribution to generate random samples
within the specified range.
Plot the Distribution:
o Create a histogram of the generated samples.
o Set the title, labels for the x-axis (Value), and y-axis (Density).
o Display the plot.

PROGRAM:
1. Normal Distribution

import numpy as np

import matplotlib.pyplot as plt


import scipy.stats as stats
mean = 0
std_dev = 1
sample_size = 1000
normal_samples = np.random.normal(mean, std_dev, sample_size)

plt.hist(normal_samples, bins=30, density=True, alpha=0.6,
         color='b', edgecolor='black')
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = stats.norm.pdf(x, mean, std_dev)
plt.plot(x, p, 'k', linewidth=2)
plt.title('Normal Distribution with Bell Curve')
plt.xlabel('Value')
plt.ylabel('Density')
plt.show()
2. Bernoulli Distribution

import numpy as np

import matplotlib.pyplot as plt


p = 0.5
sample_size = 1000

bernoulli_samples = np.random.binomial(1, p, sample_size)


plt.hist(bernoulli_samples, bins=2, density=True, alpha=0.6, color='g')
plt.title('Bernoulli Distribution')
plt.xlabel('Outcome')
plt.ylabel('Density')
plt.xticks([0, 1])
plt.show()
3.Discrete Uniform Distribution
import numpy as np
import matplotlib.pyplot as plt
low = 1
high = 6
sample_size = 1000
discrete_uniform_samples = np.random.randint(low, high + 1, sample_size)
plt.hist(discrete_uniform_samples, bins=range(low, high + 2),
density=True, alpha=0.6, color='r')
plt.title('Discrete Uniform Distribution')
plt.xlabel('Value')
plt.ylabel('Density')
plt.show()
4.Continuous Uniform Distribution
import numpy as np
import matplotlib.pyplot as plt
low = 0
high = 1
sample_size = 1000
continuous_uniform_samples = np.random.uniform(low, high, sample_size)

plt.hist(continuous_uniform_samples, bins=30, density=True,
         alpha=0.6, color='purple')
plt.title('Continuous Uniform Distribution')
plt.xlabel('Value')
plt.ylabel('Density')
plt.show()

RESULT:
The generated distributions illustrate different probability scenarios: Normal shows
a bell curve where values cluster around the mean; Bernoulli models binary
outcomes based on a success probability. Discrete Uniform gives equal likelihood to
integers within a range, while Continuous Uniform evenly spreads probability across
a continuous interval.
EX NO-04

DATE POSTERIOR PROBABILITY

AIM:
The aim of calculating the posterior probability in the given problem is to
determine the probability that a person actually has the rare disease given that they
have tested positive for it.
ALGORITHM:
1. Define Prior Probabilities:
o Establish the initial probabilities of the events of interest, in this case,
the probability of having the disease (p_disease) and not having the
disease (p_no_disease).

2. Define Conditional Probabilities (Likelihoods):


o Determine the probabilities of observing certain evidence given the
occurrence of each event.
o Here, it's the probability of testing positive given the presence of the disease
(p_pos_given_disease) and the probability of testing negative given the
absence of the disease (p_neg_given_no_disease).

3. Calculate Probability of Evidence:


o Use the law of total probability to calculate the overall probability of
observing the evidence, in this case, the probability of testing
positive (p_pos).
o This involves summing the probabilities of testing positive under both
scenarios (having the disease and not having it), weighted by their
respective prior probabilities.

4. Apply Bayes' Theorem:


o Utilize Bayes' Theorem to calculate the posterior probability, which is the
updated probability of an event occurring given the observed evidence.
o In this scenario, it's the probability of having the disease given a positive
test result (p_disease_given_pos). This is calculated using the formula:
P(Disease | Positive Test) = [P(Positive Test | Disease) *
P(Disease)] / P(Positive Test)
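o For example, with assumed values of a 1% disease prevalence, a 99% probability of
testing positive given the disease, and a 95% probability of testing negative without it:
P(Positive Test) = 0.99 × 0.01 + 0.05 × 0.99 = 0.0594, so
P(Disease | Positive Test) = 0.0099 / 0.0594 ≈ 0.167, i.e. only about a 17% chance of
actually having the disease despite the positive test.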
5. Construct Joint Probability Distribution Table:

o Create a table that summarizes the probabilities of all possible


combinations of events and their corresponding evidence.
o In this case, it includes the probabilities of testing positive or negative
given the presence or absence of the disease.

6. Plot:
o Visualize the joint probability distribution using a bar plot to gain a better
understanding of the relationships between the events and their
probabilities.

PROGRAM:
p_disease = float(input("Enter the overall probability of having the disease : "))
p_pos_given_disease = float(input("Enter the probability of testing positive given having the disease : "))
p_neg_given_no_disease = float(input("Enter the probability of testing negative given not having the disease : "))
p_no_disease = 1 - p_disease

p_pos = (p_pos_given_disease * p_disease) + ((1 - p_neg_given_no_disease) * p_no_disease)
p_disease_given_pos = (p_pos_given_disease * p_disease) / p_pos
print("Posterior probability of having disease given positive test:
", p_disease_given_pos)
joint_prob_table = {
('Positive Test', 'Disease'): p_pos_given_disease * p_disease,
('Positive Test', 'No Disease'): (1 - p_neg_given_no_disease) * p_no_disease,
('Negative Test', 'Disease'): (1 - p_pos_given_disease) * p_disease,
('Negative Test', 'No Disease'): p_neg_given_no_disease * p_no_disease
}
import pandas as pd
df = pd.DataFrame(joint_prob_table, index=['Probability']).transpose()
print("\nJoint Probability Distribution Table:")
print(df)
import matplotlib.pyplot as plt
df.plot(kind='bar', figsize=(8, 6))
plt.title('Joint Probability Distribution')
plt.ylabel('Probability')
plt.xlabel('(Test Result, Disease Status)')
plt.xticks(rotation=0)
plt.tight_layout()
plt.show()

RESULT:
The posterior probability provides the updated likelihood of having the disease given
a positive test result. It combines prior knowledge about the disease prevalence with
the test's accuracy.
EX NO-05

DATE BAYESIAN NETWORK: IMPLEMENTING CONDITIONAL PROBABILITY DISTRIBUTIONS

AIM:
To implement a Bayesian Network using pgmpy for modeling dependencies
between random variables, define Conditional Probability Distributions (CPDs),
check model consistency, and visualize the network.
ALGORITHM:
1. Define the Bayesian Network Structure:
• Create a Bayesian Network and add nodes representing the variables
Low CW, Unknown SR, Blockage, RT-too high, P too high, and Alarm.
• Add directed edges between nodes to indicate dependencies.
2. Define Conditional Probability Distributions (CPDs):
• Define CPDs for each variable:
o Low CW: Represents the probability of Low CW being true or false.
o Unknown SR: Represents the probability of Unknown SR being true or
false.
o RT-too high: Probability based on Low CW and Unknown SR.
o P too high: Probability based on Blockage.
o Alarm: Probability based on RT-too high and P too high.
3. Add CPDs to the Model:
• Add each CPD to the Bayesian Network model.
4. Check Model Consistency:
• Validate the model to ensure the network is correctly structured
with consistent CPDs.
5. Visualize the Bayesian Network:
• Use NetworkX to visualize the Bayesian Network.
6. Print the CPDs:
• Output the CPDs for each variable to understand the defined probabilities.

PROGRAM:
import networkx as nx
import matplotlib.pyplot as plt
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD

# Define the Bayesian Network structure
model = BayesianNetwork([
    ('Low CW', 'RT-too high'),
    ('Unknown SR', 'RT-too high'),
    ('Blockage', 'P too high'),
    ('RT-too high', 'Alarm'),
    ('P too high', 'Alarm')
])

# Define the CPDs


low_cw_cpd = TabularCPD(variable='Low CW', variable_card=2,
values=[[0.9], [0.1]])
unknown_sr_cpd = TabularCPD(variable='Unknown SR', variable_card=2,
values=[[0.8], [0.2]])
rt_too_high_cpd = TabularCPD(variable='RT-too high', variable_card=2,
values=[[0.10, 0.20, 0.30, 0.40],
[0.90, 0.80, 0.70, 0.60]],
evidence=['Low CW', 'Unknown SR'], evidence_card=[2, 2])
p_too_high_cpd = TabularCPD(variable='P too high', variable_card=2,
values=[[0.95, 0.05],
[0.05, 0.95]],
evidence=['Blockage'], evidence_card=[2])
alarm_cpd = TabularCPD(variable='Alarm', variable_card=2,
values=[[0.95, 0.80, 0.70, 0.05],
[0.05, 0.20, 0.30, 0.95]],
evidence=['RT-too high', 'P too high'], evidence_card=[2, 2])

# Add CPDs to the model


model.add_cpds(low_cw_cpd, unknown_sr_cpd, rt_too_high_cpd,
p_too_high_cpd, alarm_cpd)

# Check the model for consistency
model.check_model()

# Visualize the Bayesian Network
pos = nx.circular_layout(model)
nx.draw_networkx_edges(model, pos=pos)
nx.draw(model, with_labels=True, node_size=2000, node_color='skyblue', pos=pos)
plt.show()

# Print the CPDs


print("Conditional Probability Table for Low CW:")
print(low_cw_cpd)

print("\nConditional Probability Table for Unknown SR:")


print(unknown_sr_cpd)

print("\nConditional Probability Table for RT-too high:")


print(rt_too_high_cpd)

print("\nConditional Probability Table for P too high:")


print(p_too_high_cpd)

print("\nConditional Probability Table for Alarm:")


print(alarm_cpd)

RESULT:
The Bayesian Network was successfully implemented, with defined
CPDs and verified model consistency. The network's structure and CPDs were
visualized, demonstrating the dependencies between variables effectively.
