
AI After Mids Notes

CHAPTER 7: Detecting Patterns with Unsupervised Learning

Topics:

1) What is unsupervised learning?

Unsupervised learning means the computer learns from data on its own, without being told the patterns and structures in advance. We do not give the computer labeled examples. The computer is only given data, and it finds patterns and clusters in it by itself. In this way the computer explores the similarities and relationships in the data on its own. Unsupervised learning techniques are used for data exploration, clustering, dimensionality reduction, anomaly detection, and pattern recognition.
2) Clustering data with the K-Means algorithm:

• Clustering is a popular unsupervised learning technique used to analyze data and find clusters or subgroups within it. Clustering uses a similarity measure, such as Euclidean distance, to find the subgroups. This similarity measure can estimate the tightness of a cluster. Clustering is the process of organizing data into subgroups whose elements are similar to each other.

• The goal of the algorithm is to identify the intrinsic properties of data points that make them belong to the same subgroup. There is no universal similarity metric that works in every situation. For example, we might want to find the representative data point of each subgroup, or we might want to find the outliers in the data. Depending on the situation, different metrics may be more suitable.

• K-Means is a well-known clustering algorithm. To use it, the number of clusters is assumed in advance. The data is segmented into K subgroups using various attributes. The number of clusters is fixed, and the data is classified based on that number. The locations of the centroids are updated in every iteration; centroids represent the centers of the clusters. We keep iterating until the centroids are placed at their optimal locations.

• The placement of the initial centroids plays an important role in the algorithm. These centroids should be placed intelligently, because they directly affect the results. A good strategy is to place them as far away from each other as possible.

• The basic K-Means algorithm places the initial centroids randomly, whereas the K-Means++ algorithm chooses the initial centroids algorithmically from the list of input data points, trying to pick initial centroids that will converge quickly. We then iterate through the training dataset and assign each data point to its closest centroid.

• Once we have gone through the whole dataset, the first iteration is complete. The points have been divided into groups based on the initial centroids. The centroid locations are then recalculated based on the new clusters obtained at the end of the first iteration. With the new set of K centroids, the process repeats: we iterate over the dataset and assign each point to its closest centroid.

• As these steps are repeated, the centroids keep moving toward their equilibrium positions. After some iterations, the centroids stop changing their locations; they converge to a final location. These K centroids are then used for inference.
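
The book's code isn't reproduced in these notes, but a minimal sketch of the algorithm described above, using scikit-learn and synthetic data (both my assumptions, not the book's exact example), looks like this:

```python
# A minimal K-Means sketch using scikit-learn (assumed installed);
# the 2-D data here is synthetic, just for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Generate points loosely grouped around three centers
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) + c for c in ([0, 0], [5, 5], [0, 5])])

# init='k-means++' places the initial centroids far apart,
# as described above; n_clusters is assumed in advance
kmeans = KMeans(n_clusters=3, init='k-means++', n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print(kmeans.cluster_centers_)  # final (converged) centroid locations
print(labels[:10])              # cluster assignment of the first 10 points
```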

(The book also includes the code for this topic, but sir never actually made us do the coding.)

4) Estimating the number of clusters with the Mean Shift algorithm:

The Mean Shift algorithm estimates the number of clusters directly from the data, without any predefined count. The goal of the algorithm is to iteratively shift data points toward their regions of higher density.

The Mean Shift algorithm works as follows:

• Kernel Density Estimation: Around each seed point (starting point), every data point within a given radius is assigned a weight based on its distance. These weights reflect the local density of the data points around each seed point.

• Mean Shift: For each data point, the algorithm computes the mean of the weighted data points around it. This mean causes the data point to shift toward a region of higher density.

• Update: The data point is shifted toward the mean location found in the previous step. This step is repeated until convergence.

• Convergence happens when the data points settle into the regions of higher density and stop shifting. At that point, the data points that ended up together are considered to belong to the same cluster.

Estimating the number of clusters with Mean Shift is therefore data-driven. The algorithm does not need the number of clusters to be specified in advance. Instead, it identifies the number of clusters automatically, from the convergence points or regions in the data space.

After convergence, by looking at the final positions of the data points, we can estimate how many clusters exist in the data. Each convergence region represents a separate cluster.

Mean Shift has some advantages, such as handling irregularly shaped clusters and being adaptive in estimating the number of clusters. However, it can run into difficulties with large datasets because of its computational complexity.
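
A minimal sketch of this idea with scikit-learn (library choice and data are my assumptions):

```python
# Mean Shift with scikit-learn; estimate_bandwidth picks the
# kernel radius from the data itself.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(60, 2) + c for c in ([0, 0], [6, 6])])

bandwidth = estimate_bandwidth(X, quantile=0.2)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
labels = ms.fit_predict(X)

# The number of clusters is not specified anywhere above;
# it falls out of the convergence points automatically.
print('Estimated clusters:', len(np.unique(labels)))
print('Cluster centers:\n', ms.cluster_centers_)
```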

5) What are Gaussian Mixture Models?

Gaussian Mixture Models (GMMs) are models that represent a probability distribution as a combination of multiple Gaussian distributions. This model is used in unsupervised learning and assigns data points to clusters.

The main concept of GMMs is that each cluster is represented by a Gaussian distribution. Each Gaussian distribution comes with its own mean (average) and covariance (dispersion). GMMs also have a weight for each cluster, which states how much each Gaussian distribution contributes to the overall probability distribution.

GMMs are estimated with the Expectation-Maximization (EM) algorithm. It starts from initial values, and then the EM algorithm improves the model by iteratively repeating these steps:

1) E-Step (Expectation): In this step, a responsibility score is calculated for every cluster, for every data point. This score states which cluster the data point is most suitable for. We calculate it from the probabilities and weights of the Gaussian distributions.

2) M-Step (Maximization): In this step, the responsibility scores are used to update the mean and covariance of each cluster. The update is done so that the log-likelihood (a metric that measures the likelihood of the data points) is maximized.

The EM iterations continue until convergence, or until a maximum number of iterations. Once convergence is reached, the GMM provides the final values of the means, covariances, and weights.

GMMs are used for tasks such as data clustering, data point generation, and anomaly detection. The model can represent clusters flexibly, where each cluster can have a different shape, size, and orientation. That is why GMMs are quite powerful at modeling complex data distributions.

In short, Gaussian Mixture Models (GMMs) are used in unsupervised learning to represent clusters and model probability distributions. The model combines multiple Gaussian distributions and is optimized with the EM algorithm. GMMs are flexible and useful for modeling complex data distributions.
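
A minimal GMM sketch with scikit-learn (an assumption on my part; the book's own code may differ). GaussianMixture runs EM internally:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + [4, 4]])

# EM: E-step and M-step repeat until convergence
gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
gmm.fit(X)

print(gmm.means_)                 # per-cluster means
print(gmm.weights_)               # mixing weights of the Gaussians
print(gmm.predict_proba(X[:3]))   # soft (probabilistic) assignments
```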
6) Building a classifier based on Gaussian Mixture Models

Gaussian Mixture Models (GMMs) can also be used to build classifiers. Alongside their clustering capabilities, GMMs can be used to build probabilistic classifiers.

Building a GMM-based classifier consists of the following steps:

1) Data Preparation: Collect a labeled dataset, where every data point is paired with a specific class or label. Preprocess the data and prepare it for training.

2) Model Training: Fit a GMM on the labeled training data, where each class is represented by its own Gaussian distribution inside the GMM. The number of clusters in the GMM should equal the number of distinct classes in the dataset.

3) Probability Estimation: Once the GMM is trained, it can be used to estimate the probability that a new, unlabeled data point belongs to each class. This estimate is calculated from the likelihood of the data point under each Gaussian distribution. The probability is computed using Bayes' theorem, i.e. the posterior probability.

4) Class Prediction: The class prediction for a new data point is made by selecting the class with the highest estimated probability. The data point is then assigned the corresponding class label.

5) Model Evaluation: Evaluate the performance of the GMM-based classifier using evaluation metrics such as accuracy, precision, recall, or F1 score. These metrics indicate how well the classifier performs on a test or validation dataset.

It is important to remember that GMM-based classifiers assume that the underlying data distribution within each class can be represented by a Gaussian distribution. If the data shows non-Gaussian characteristics or complex dependencies, other classifiers may be more suitable.
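
A hedged sketch of those steps: fit one GaussianMixture per class, then label new points by the highest per-class likelihood. The dataset and hyperparameters here are illustrative assumptions, not the book's:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

# Step 1: labeled data
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 2: one Gaussian per class
models = {c: GaussianMixture(n_components=1, random_state=0).fit(X_tr[y_tr == c])
          for c in np.unique(y_tr)}

# Steps 3-4: per-class log-likelihood, pick the most likely class
scores = np.column_stack([models[c].score_samples(X_te) for c in sorted(models)])
y_pred = np.argmax(scores, axis=1)

# Step 5: evaluation
print('Accuracy:', np.mean(y_pred == y_te))
```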
7) Finding subgroups in stock market using the Affinity Propagation model:

Using the Affinity Propagation model is another way to find subgroups or clusters in the stock market. This model is used in unsupervised learning and assigns data points to similar subgroups.

Finding subgroups with the Affinity Propagation model consists of the following steps:

1) Data Preparation: Gather stock market data and pre-process it so it is ready for the model. This can include stock prices, trading volumes, technical indicators, and other relevant data.

2) Building the Similarity Matrix: Prepare a similarity matrix that contains a similarity score between every pair of data points. The similarity score can be calculated using Euclidean distance, a correlation coefficient, or some other similarity metric.

3) Affinity Propagation Model Training: Train the Affinity Propagation model with the similarity matrix. The model represents each cluster by an "exemplar": the representative point of that cluster.

4) Message Passing: Inside the Affinity Propagation model, every data point sends "messages" to all the other data points. These messages are used for exemplar selection and cluster assignments. The messages are sent through an iterative process in which each data point exerts influence over the selection of its exemplar.

5) Convergence and Cluster Assignments: The Affinity Propagation model keeps iterating until convergence. After convergence, every data point is included in a cluster together with its assigned exemplar.

With the Affinity Propagation model, subgroups or clusters in stock market data can be explored. These clusters help identify stocks with similar patterns and dynamics, which in turn helps investors and traders understand market trends and investment opportunities.

This is a powerful tool for stock market analysis, because it lets us organize the stocks in the market into subgroups.
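
A minimal Affinity Propagation sketch (scikit-learn assumed). Real stock data would replace the random matrix below; the book's own example uses historical quotes, which are not reproduced here:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
# Rows = stocks, columns = daily price movements (synthetic stand-in)
movements = rng.randn(20, 100)
X = StandardScaler().fit_transform(movements)

ap = AffinityPropagation(random_state=0).fit(X)

# The exemplars (cluster representatives) and assignments fall out
# of the message-passing iterations described above.
print('Number of subgroups:', len(ap.cluster_centers_indices_))
print('Cluster labels:', ap.labels_)
```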
8) Segmenting the market based on shopping patterns

Market segmentation, i.e. dividing the market into segments, can be done on the basis of shopping patterns. In this approach, market segments are formed that group together customers with similar shopping behaviors and preferences. The segmentation process rests on data analysis and an understanding of customer behavior. By analyzing the market segments, businesses can target their products and services. This gives them a focused approach in which their marketing strategies and communication can reach their target customers more effectively. Market segmentation is an essential technique for businesses, so that they can understand their customers better and fulfill their needs.
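
A toy illustration of clustering shoppers by purchase counts; the feature names and data are my own invention, not the book's dataset:

```python
import numpy as np
from sklearn.cluster import KMeans

# Columns: [groceries, electronics, clothing] purchases per month
customers = np.array([
    [12, 0, 1], [10, 1, 2], [11, 0, 0],   # frequent grocery shoppers
    [1, 5, 0], [0, 6, 1], [2, 4, 0],      # electronics-focused
    [1, 0, 7], [2, 1, 8], [0, 0, 6],      # clothing-focused
])

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)
print('Segment of each customer:', segments)
```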

Chapter 7 END
Chapter 8: Building Recommender Systems

Topics:

1) Extracting the nearest neighbors:

"Extracting the nearest neighbors" means pulling out the closest neighbors of a given point. In this topic we discuss how the nearest neighbors of any data point can be extracted.

To extract the nearest neighbors, some popular algorithms and techniques are used, such as:

K-Nearest Neighbors (KNN): In the KNN algorithm, the K nearest neighbors of each data point are extracted. To calculate these neighbors, Euclidean distance or other similarity measures are used. KNN is used in classification and regression problems.

Locality Sensitive Hashing (LSH): In the LSH technique, data points are mapped into buckets with the help of hash functions. As a result, similar data points land in the same buckets, which represent the nearest neighbors. LSH is used for nearest neighbor search in large-scale datasets.

Annoy (Approximate Nearest Neighbors Oh Yeah): Annoy is an approximate nearest neighbor search library. It supports efficient nearest neighbor search in high-dimensional data. It is used in recommendation systems and information retrieval.

These are some techniques with which we can extract the nearest neighbors of data points.
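
A minimal sketch of extracting the K nearest neighbors of a query point with scikit-learn (library choice and data are my assumptions; the book may structure its example differently):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.array([[2, 1], [1, 3], [3, 3], [5, 4], [6, 1], [7, 3]])
query = np.array([[4, 3]])

# Find the 3 closest points to the query, by Euclidean distance
knn = NearestNeighbors(n_neighbors=3, metric='euclidean').fit(X)
distances, indices = knn.kneighbors(query)

print('Nearest neighbor indices:', indices[0])
print('Distances:', distances[0])
```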

2) Building a K-Nearest Neighbors classifier:

"Building a K-Nearest Neighbors classifier" means the procedure for constructing a KNN classifier. In this topic we discuss the process of building a K-Nearest Neighbors (KNN) classifier.

The K-Nearest Neighbors (KNN) classifier is a supervised learning technique in which each data point is classified based on its K nearest neighbors. Building this classifier involves the following:

1) Data Preparation: First of all, labeled training data has to be prepared, where every data point comes with its corresponding class or label. This also includes preprocessing the data and representing it as feature vectors.

2) KNN Parameters: The K parameter of the classifier has to be set; it states how many nearest neighbors will be considered for each data point. This parameter depends on your specific problem and dataset.

3) Distance Metric: A distance metric, such as Euclidean distance, Manhattan distance, or cosine similarity, has to be chosen to calculate the nearest neighbors. The distance metric helps measure the similarity between data points.

4) Prediction Process: In the KNN classifier, the K nearest neighbors are selected for every new, unlabeled data point. Looking at the classes of these neighbors, the class of the new data point is assigned through majority voting or weighted voting.
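
A hedged sketch following the four steps above; the dataset and K=5 are illustrative choices, not the book's:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Step 1: labeled data, represented as feature vectors
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Steps 2-3: choose K and the distance metric
clf = KNeighborsClassifier(n_neighbors=5, metric='euclidean')
clf.fit(X_tr, y_tr)

# Step 4: majority vote among the 5 nearest training points
print('Test accuracy:', clf.score(X_te, y_te))
```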
3) Computing similarity scores:

"Computing similarity scores" means calculating how similar items are to each other. In this topic we discuss how to compare the different objects in a dataset when building a recommendation system. If the dataset consists of people and their different movie preferences, then to make recommendations we need to understand how to compare any two people with each other. This is where the similarity score matters: a similarity score gives an estimate of how similar two data points are.

Two scores are used frequently in this domain: the Euclidean score and the Pearson score. The Euclidean score is computed from the Euclidean distance between two data points.

The Euclidean distance can be unbounded, so we convert this value so that the Euclidean score ranges from 0 to 1. If the Euclidean distance between two objects is large, the Euclidean score should be low, because a low score indicates that the objects are not similar.

The Pearson score is a measure of the correlation between two data points. It uses the covariance of the two data points along with their standard deviations. The score can range from -1 to +1. A score of +1 indicates that the data points are similar, and a score of -1 indicates that they are different. A score of 0 indicates that there is no relationship between them.
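
A small sketch of both scores, written directly from the definitions above (the sample ratings are invented):

```python
import numpy as np

def euclidean_score(a, b):
    # Map the unbounded distance into (0, 1]: larger distance -> lower score
    return 1.0 / (1.0 + np.linalg.norm(np.asarray(a) - np.asarray(b)))

def pearson_score(a, b):
    # Correlation in [-1, +1]: +1 similar, -1 opposite, 0 unrelated
    return np.corrcoef(a, b)[0, 1]

ratings_1 = [2.5, 3.5, 3.0, 4.0]
ratings_2 = [3.0, 3.5, 1.5, 5.0]
print('Euclidean score:', euclidean_score(ratings_1, ratings_2))
print('Pearson score:', pearson_score(ratings_1, ratings_2))
```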

4) Finding similar users using collaborative filtering:

"Finding similar users using collaborative filtering" means searching for similar users with the collaborative filtering technique. In this topic we discuss how similar users are found through collaborative filtering.

Collaborative filtering is a recommendation system technique in which we use people's interests and preferences to find similar users and the things they like. Using this technique, we can suggest items to one user on the basis of other users' favorite items.

To find similar users, some popular algorithms and techniques are used, such as:

1) User-Based Collaborative Filtering: In user-based collaborative filtering, we compare a user's favorite items with the favorite items of their nearest users. We look for the users who like the most similar items.

2) Item-Based Collaborative Filtering: In item-based collaborative filtering, we look at the users who like a given item. We calculate item-item similarity to find similar items and suggest them to those users.

These techniques help find similar users through collaborative filtering. In this way, we can understand users' interests and preferences and provide them with recommendations drawn from similar users.
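
A toy user-based sketch: rank users by Pearson similarity to the input user. The ratings dictionary is invented, not the book's data:

```python
import numpy as np

ratings = {
    'Alice': {'Movie A': 5, 'Movie B': 3, 'Movie C': 4},
    'Bob':   {'Movie A': 4, 'Movie B': 3, 'Movie C': 5},
    'Carol': {'Movie A': 1, 'Movie B': 5, 'Movie C': 2},
}

def similar_users(target, ratings):
    scores = []
    for user in ratings:
        if user == target:
            continue
        common = sorted(set(ratings[target]) & set(ratings[user]))
        if len(common) < 2:
            continue  # need at least two shared items for correlation
        a = [ratings[target][m] for m in common]
        b = [ratings[user][m] for m in common]
        scores.append((np.corrcoef(a, b)[0, 1], user))
    return sorted(scores, reverse=True)

print(similar_users('Alice', ratings))  # Bob should rank above Carol
```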
5) Building a movie recommendation system:

So far we have laid the foundation for our recommendation system, and we have learned about these topics:

• Extracting the nearest neighbors
• Building a K-nearest neighbors classifier
• Computing similarity scores
• Finding similar users using collaborative filtering

Now that all the building blocks are ready, it is time to build a movie recommendation system. We have learned all the fundamental concepts needed to build a recommendation system. In this section, we will build a movie recommendation system based on the data provided in the ratings.json file. This file contains a set of people and their ratings for various movies. To find movie recommendations for a given user, we need to find similar users in the dataset and then prepare recommendations for that person.

In this way, we will build a personalized movie recommendation system in which recommendations are given based on users' favorite movies.

Steps:
Important note: (The book contains the full code, but I am only writing the steps in English here.)

Let's start:
• Create a new Python file.
• Define a function to parse the input arguments. The input argument is the name of the user.
• Define a function to get the movie recommendations for a given user. If the user doesn't exist in the dataset, the code will raise an error.
• Define the variables to track the scores.
• Compute a similarity score between the input user and all the other users in the dataset.
• If the similarity score is less than 0, continue with the next user in the dataset.
• Extract a list of movies that have been rated by the current user but haven't been rated by the input user.
• For each item in the filtered list, keep track of the weighted rating based on the similarity score. Also keep track of the similarity scores.
• If there are no such movies, then we cannot recommend anything.
• Normalize the scores based on the weighted scores.
• Sort the scores and extract the movie recommendations.
• Define the main function and parse the input arguments to extract the name of the input user.
• Load the movie ratings data from the file ratings.json.
• Extract the movie recommendations and print the output.

Those were the steps; after them, the movie recommender system is ready. A compact sketch of the core recommendation function follows below.
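
This sketch follows the steps above from memory rather than the book's exact listing: weight each unseen movie's rating by the rater's similarity to the input user, then normalize. The ratings.json file is assumed to exist, and the queried user name is a placeholder:

```python
import json
import numpy as np

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def get_recommendations(data, input_user):
    if input_user not in data:
        raise ValueError('Cannot find ' + input_user + ' in the dataset')
    overall, sim_sums = {}, {}
    for user in (u for u in data if u != input_user):
        common = sorted(set(data[input_user]) & set(data[user]))
        if len(common) < 2:
            continue  # not enough overlap to compute a correlation
        score = pearson([data[input_user][m] for m in common],
                        [data[user][m] for m in common])
        if score <= 0:
            continue  # skip dissimilar users
        # Movies rated by this user but not yet by the input user
        for movie in set(data[user]) - set(data[input_user]):
            overall[movie] = overall.get(movie, 0) + data[user][movie] * score
            sim_sums[movie] = sim_sums.get(movie, 0) + score
    if not overall:
        return ['No recommendations possible']
    # Normalize weighted ratings by the total similarity, then sort
    ranked = sorted(((total / sim_sums[m], m) for m, total in overall.items()),
                    reverse=True)
    return [movie for _, movie in ranked]

with open('ratings.json') as f:   # dataset file from the book
    data = json.load(f)
print(get_recommendations(data, 'Some User'))  # any user present in the file
```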
Chapter 8 END

Chapter 9: Logic Programming

Topics:

1) What is logic programming?

Logic programming is a programming paradigm based on formal logic and logical inference. In it, programs are built using rules and facts that express relationships and conditions. Logic programming means defining a set of logical rules and then querying the program to obtain solutions or new information.

In logic programming, the focus is on "what" to do, not "how" to do it. Programs are viewed in terms of logic and relationships, and the logic programming language takes care of the execution and inference process.

Logic programming languages, such as Prolog, provide a way to define facts and rules and to perform queries against them. These languages use a backward-chaining logical mechanism, in which queries are matched against the rules and facts to find solutions.

2) Understanding the building blocks of logic programming:

In programming, in object-oriented or imperative paradigms, a variable always has to be defined. In logic programming, things work a bit differently. An argument can be passed to a function, and the interpreter will instantiate the variables by looking at the facts defined by the user. This is an excellent way of solving the variable matching problem. The process of matching variables with different items is called unification, and it is a distinctive feature of logic programming. Relations can also be defined in logic programming; relations are defined in the form of facts and rules.

Facts are simply true statements about the program and the data. Their syntax is straightforward. For example, "Donald is Allan's son" is a fact, whereas "Who is Allan's son?" is not a fact. Every logical program needs facts so that it can answer the questions posed to it on their basis.

Rules are the constraints we have learned, in the sense of how to derive different facts and query them. They are conditions that must be satisfied, and they help you draw conclusions about the domain. For example, suppose you are working on a chess engine: you would have to specify all the rules about how each piece can move on the chessboard. A small sketch of facts, rules, and unification follows below.
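
A tiny facts-and-rules sketch using the kanren library (the successor of logpy; the library choice is my assumption). run() performs unification, instantiating x from the facts:

```python
from kanren import Relation, conde, facts, run, var

parent = Relation()
facts(parent, ('Allan', 'Donald'),   # fact: Allan is Donald's parent
              ('Donald', 'Sam'))     # fact: Donald is Sam's parent

x = var()
print(run(1, x, parent(x, 'Donald')))  # who is Donald's parent? -> ('Allan',)

# A rule: g is a grandparent of c if g is a parent of some p
# and p is a parent of c
def grandparent(g, c):
    p = var()
    return conde([parent(g, p), parent(p, c)])

print(run(1, x, grandparent(x, 'Sam')))  # -> ('Allan',)
```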

3) Solving problems using logic programming:

• This topic is code; please read it once from the book. It doesn't seem that important to me, but still go through it once on your own from the book. Thanks.

4) Validating primes:

In this topic we will see how we can accept primes with logic programming and verify them.

To validate primes we can use logic programming, where we express the problem in the form of logic rules and facts.

Let's see how to use logic programming to check for prime numbers. We will use the constructs available in logpy to determine which numbers in the given list are prime, as well as finding out if a given number is a prime or not.

Steps:
• Create a new Python file.
• Next, define a function that checks if the given number is prime, depending on the type of data. If it's a number, then it's straightforward. If it's a variable, then we must run the sequential operation. To give a bit of background, the method conde is a goal constructor that provides logical AND and OR operations.
• The method condeseq is like conde, but it supports the generic iteration of goals. Define a set of numbers and check which numbers are prime. The method membero checks if a given number is a member of the list of numbers specified in the input argument.
• Let's use the function in a slightly different way now by printing the first 7 prime numbers.
• The full code is given in prime.py. If you run the code, you will see the output. A sketch of what that code looks like is given below.
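
Written from memory, the book's prime.py is roughly along the following lines; treat it as a sketch (it assumes the logpy and sympy packages are installed, and details may differ from the book's exact listing):

```python
import itertools as it
import logpy.core as lc
from sympy.ntheory.generate import prime, isprime

def check_prime(x):
    if lc.isvar(x):
        # Unbound variable: lazily enumerate primes with condeseq
        return lc.condeseq([(lc.eq, x, p)] for p in map(prime, it.count(1)))
    else:
        # Concrete number: just test it
        return lc.success if isprime(x) else lc.fail

x = lc.var()

# Which numbers in this list are prime?
numbers = (23, 4, 27, 17, 13, 10, 21, 29, 3, 32, 11, 19)
print(set(lc.run(0, x, (lc.membero, x, numbers), (check_prime, x))))

# Print the first 7 primes by leaving x unbound
print(lc.run(7, x, check_prime(x)))
```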

5) Parsing a family tree:

This topic is given in very easy English in the book, so study it from there; inshallah it will be easy to understand, because the book's English is very simple. Its page number is 209 (the book's page number, not the slides').

Next topics: 6) Analysing Geography, 7) Building a puzzle solver.

Both of these topics are code. Sir only said to run the code that is written there; all he said was to run it on your laptop while looking at the code. If you want to see more of these topics, you can look at page 214 of the book.

END
Chapter 10: Heuristic Search Techniques

Topics:
1) Is heuristic search artificial intelligence?

Yes, heuristic search is a part of artificial intelligence. This technique is used in AI to search for useful solutions in problem solving. Heuristic search uses heuristics to help explore the search space quickly.

2) What is heuristic search?

Searching and organizing data is an important topic in artificial intelligence. There are many problems in which we have to search for a solution. A problem can have many possible solutions, and we don't know which ones are correct. By organizing the data efficiently, we can carry out the search quickly and effectively.

Often, there are so many possible ways to solve a given problem that no single algorithm can be designed to find the best solution. Checking every solution is not feasible either, because it can be extremely expensive. In such situations we use a rule of thumb that helps us choose among the options and eliminate the ones that are clearly wrong. This rule of thumb is called a heuristic. Heuristic search is the approach in which we use heuristics to guide the search.

Heuristic techniques are powerful because they speed up the search process. Even when heuristics cannot eliminate some options, they order the options so that the better solutions come up first. As mentioned before, heuristic searches take a practical, approximate approach to the search. We will now learn how to take shortcuts and prune the search tree.

3) Uninformed versus informed search:

The next topic is "Uninformed versus informed search", which explains how uninformed (knowledge-free) search and informed (knowledge-guided) search differ from each other. In uninformed search, such as breadth-first search and depth-first search, we have no information about any node; we simply keep exploring them. In contrast, in informed search, such as A* (A-star) search, we have some information about each node that leads us toward the right path.

In uninformed search, we keep no information about the cost or priority of any node. We simply proceed with whichever node satisfies our search criteria. In depth-first search, we fully explore one path first and then move on to the next path. In breadth-first search, we fully explore one level first and then move on to the next level.

In informed search, we keep information about each node, such as its cost, its heuristic value, or its distance to the goal state. A* search is an informed search algorithm that uses a combination of cost and heuristic information. It gives us an estimate of every node's priority, and we explore the node that looks best according to that priority.

Overall, uninformed and informed search differ in their approach and their search capabilities. Uninformed search proceeds without knowledge, while informed search uses knowledge to guide the search. Both have their own advantages and limitations, and we use them according to the requirements of the problem. A small sketch contrasting the two follows below.
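
To make the contrast concrete, here is an illustrative sketch (the grid problem and the Manhattan-distance heuristic are my own example, not from the book). BFS expands level by level; A* uses the heuristic to steer toward the goal and expands far fewer nodes:

```python
import heapq
from collections import deque

def neighbors(cell, size=5):
    x, y = cell
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < size and 0 <= ny < size:
            yield (nx, ny)

def bfs(start, goal):
    # Uninformed: no node priorities, just level-by-level expansion
    frontier, seen, expanded = deque([start]), {start}, 0
    while frontier:
        cell = frontier.popleft()
        expanded += 1
        if cell == goal:
            return expanded
        for n in neighbors(cell):
            if n not in seen:
                seen.add(n)
                frontier.append(n)

def a_star(start, goal):
    # Informed: priority = path cost so far + Manhattan heuristic
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), h(start), 0, start)]  # (f, h, g, cell)
    seen, expanded = {start}, 0
    while frontier:
        _, _, g, cell = heapq.heappop(frontier)
        expanded += 1
        if cell == goal:
            return expanded
        for n in neighbors(cell):
            if n not in seen:
                seen.add(n)
                hn = h(n)
                heapq.heappush(frontier, (g + 1 + hn, hn, g + 1, n))

print('BFS expanded:', bfs((0, 0), (4, 4)))   # explores almost the whole grid
print('A* expanded:', a_star((0, 0), (4, 4)))  # heads straight for the goal
```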

4) Constraint satisfaction problems

Imagine problems that must be solved under constraints. These constraints are conditions that cannot be violated while solving the problem. We call such problems Constraint Satisfaction Problems (CSPs).

To build some intuition, let's look at a part of a Sudoku puzzle. Sudoku is a game in which the same number cannot be used twice within any horizontal line, vertical line, or square. A Sudoku board is used here as the example.

Using CSP and the rules of Sudoku, we can quickly determine which numbers to try and which numbers not to try in order to solve the puzzle. From the example Sudoku board (the picture is on page 223) we can work out which number should go in a given square.

If we did not use CSP, we would have to use a brute force approach, where we would try all combinations in all squares, starting from the number 1, and then check the result. But by using CSP, we can cut down the attempts in advance.

Let's reason about the highlighted square in the example. We know that this square cannot use the numbers 1, 6, 8, or 9, because those numbers are already present in its block. We know it also cannot be 2 or 7, because those are already present in its horizontal line, and 3 and 4 are present in its vertical line. This leaves us with only one possibility for what the number in this square should be: 5.

CSPs are mathematical problems that have to be defined as a set of variables. When we arrive at the final solution, the variables must obey all the constraints. This technique represents the entities of a problem as a collection of fixed constraints over variables. To solve for these variables, constraint satisfaction methods have to be used.

To solve these problems, we have to make proper use of heuristics and other search techniques so that the problems can be solved in reasonable time. A small CSP sketch follows below.
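
To make this concrete, here is a hedged sketch using the python-constraint package (the library choice and the toy constraints are my assumptions; the book may use a different library). Variables get domains, constraints prune the combinations, and the solver searches:

```python
from constraint import AllDifferentConstraint, Problem

problem = Problem()
# Four variables, each may take a value from 1..4
problem.addVariables(['a', 'b', 'c', 'd'], [1, 2, 3, 4])

# Constraints that any solution must satisfy
problem.addConstraint(AllDifferentConstraint())
problem.addConstraint(lambda a, b: a + b == 5, ('a', 'b'))
problem.addConstraint(lambda c, d: c < d, ('c', 'd'))

# The solver only returns assignments that obey every constraint
for solution in problem.getSolutions():
    print(solution)
```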
5) Local search techniques

Local search is one way of solving a CSP. In it, we keep optimizing the solution while satisfying all the constraints. We keep updating the variables over and over until we arrive at the destination. These algorithms modify the values at every step, bringing us closer to the goal. In the solution space, the updated value is closer to the goal than the previous value. That is why it is called local search.

A local search algorithm is a type of heuristic search algorithm. These algorithms use a function that calculates the quality of each update. For example, it can count the number of constraints that are being violated by the current update, or it can see how the update affects the distance to the goal. This is referred to as the cost of the assignment. The overall goal of local search is to find the minimum cost update at each step.

Hill climbing is a popular local search technique. It uses a heuristic function that measures the difference between the current state and the goal. When we start, it checks if the state is the final goal. If it is, then it stops. If not, then it selects an update and generates a new state. If the new state is closer to the goal than the current state, then it makes that the current state. If not, it ignores it and continues the process until it has checked all possible updates. It basically climbs the hill until it reaches the summit.
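
A bare-bones hill-climbing sketch of the loop just described; the objective function and step size are invented for illustration:

```python
def hill_climb(start, step=0.1):
    f = lambda x: -(x - 3) ** 2          # heuristic: higher is better
    current = start
    while True:
        neighbors = [current + step, current - step]
        best = max(neighbors, key=f)
        if f(best) <= f(current):        # no uphill move left: summit
            return current
        current = best                   # climb toward the better state

print(hill_climb(0.0))  # converges near x = 3, the maximum here
```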
6) Simulated annealing
Simulated annealing is a type of local search, as well as a stochastic
search technique. Stochastic search techniques are used extensively in
various fields, such as robotics, chemistry, manufacturing, medicine, and
economics. Stochastic algorithms are used to solve many real-world
problems: we can perform things like optimizing the design of a robot,
determining the timing strategies for automated control in factories, and
planning traffic.

Simulated annealing is a search algorithm used in artificial intelligence to solve problems. Its name comes from the annealing process in metallurgy, where a material is heated and then slowly cooled to bring it to a better state. In simulated annealing, the algorithm starts from an initial state and explores neighboring states through small random moves. These moves are accepted or rejected according to a cooling schedule, which controls the probability of accepting worse states. This lets the algorithm escape local optima and progress toward the global optimum. Simulated annealing uses a heuristic evaluation function that guides the search, with the aim of minimizing a cost function. By gradually lowering the temperature, or cooling rate, the algorithm keeps exploration and exploitation in good balance and works steadily toward reaching an excellent state. Simulated annealing is especially useful when the search space is large and difficult, and finding an exact solution in reasonable time is hard. A minimal sketch follows below.
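
A minimal simulated annealing sketch on a 1-D cost function with several local minima; the function and the cooling schedule are invented for illustration:

```python
import math
import random

def cost(x):
    return x * x + 10 * math.sin(3 * x)   # bumpy: many local minima

random.seed(0)
current = 8.0
T = 10.0                                   # initial "temperature"
while T > 1e-3:
    candidate = current + random.uniform(-1, 1)   # small random move
    delta = cost(candidate) - cost(current)
    # Accept improvements always; accept worse moves with a
    # probability that shrinks as the temperature cools
    if delta < 0 or random.random() < math.exp(-delta / T):
        current = candidate
    T *= 0.99                              # cooling schedule

print('Found x = %.3f with cost %.3f' % (current, cost(current)))
```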
These are all the topics we need to cover; the remaining topics are just code.

THE END
