Ai Advanced - Unit - 3

The document discusses evaluation metrics for classification models, focusing on Precision and Recall, which measure the accuracy of positive predictions and the ability to capture actual positives, respectively. It also explains K-Nearest Neighbors (KNN) and Support Vector Machine (SVM) algorithms, highlighting their functionalities, use cases, and types. Additionally, the document covers the ROC Curve, a graphical representation used to evaluate the performance of classification models based on false positive and true positive rates.

🎯 Precision & Recall – Evaluation Metrics (in Classification)


🎯 Precision – 5 Line Explanation
1. Precision tells us how many of the samples the model predicted as Positive were actually correct.

2. It checks how many wrong positives the model produced (False Positives should be low).

3. The formula is: Precision = TP / (TP + FP)

4. Precision matters when false positive predictions cause problems (for example, a spam filter).

5. High Precision = fewer false positives = more reliable positive predictions (see the sketch below).
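A minimal Python sketch of the Precision formula above; the y_true / y_pred arrays are made-up example values, not taken from the document:

```python
# Minimal sketch: computing Precision by counting TP and FP directly.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # actual labels (1 = positive)
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]   # model's predicted labels

# True positives: predicted 1 and actually 1; false positives: predicted 1 but actually 0
TP = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)
FP = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)

precision = TP / (TP + FP)
print(f"TP={TP}, FP={FP}, Precision={precision:.2f}")   # TP=3, FP=2, Precision=0.60
```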

🎯 Recall – 5 Line Explanation


1. Recall tells us how many of the actual Positives the model correctly caught.

2. It measures how many positives the model missed (False Negatives).

3. The formula is: Recall = TP / (TP + FN)

4. Recall matters when every positive case must be caught (for example, cancer detection).

5. High Recall = fewer missed positives = better detection rate (see the sketch below).
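A minimal sketch of the same two formulas using scikit-learn's built-in precision_score and recall_score (assuming scikit-learn is available); the labels reuse the illustrative values from the Precision sketch:

```python
# Minimal sketch: Precision and Recall via scikit-learn.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]

# precision = TP / (TP + FP), recall = TP / (TP + FN)
print("Precision:", precision_score(y_true, y_pred))   # 0.60 -> 3 / (3 + 2)
print("Recall:   ", recall_score(y_true, y_pred))      # 0.75 -> 3 / (3 + 1)
```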

📍 KNN (K-Nearest Neighbors) – Definition & Explanation


1. KNN is a supervised learning algorithm that is mainly used for classification.

2. The output for a new data point is decided based on its K nearest data points (neighbors).

3. It measures the distance (for example, Euclidean distance) from the new point to every point in the training data.

4. It then picks the majority class among the top K closest neighbors.

5. KNN is a lazy learner – it learns nothing at training time and does all its calculation at prediction time.

6. It is a simple algorithm, but it can be slow on large datasets.

7. It is best used for image recognition, recommendation systems, and pattern matching.

🔥 Features:
●​ Simple and intuitive

●​ Slow for large data (because the distances must be calculated on every prediction)

●​ Best for smaller datasets (see the sketch below)
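A minimal KNN sketch with scikit-learn's KNeighborsClassifier (assuming scikit-learn is available); the Iris dataset and K = 3 are illustrative choices, not from the document:

```python
# Minimal sketch: KNN classification with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# K = 3 nearest neighbors; Euclidean distance is the default metric
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)          # "lazy" learning: this essentially just stores the training data
print("Test accuracy:", knn.score(X_test, y_test))
```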

🤖 SVM (Support Vector Machine) – 7 Line Explanation


1. SVM is also a supervised learning algorithm, used mainly for binary classification.

2. The algorithm finds the best boundary line (hyperplane) that separates the classes.

3. This line is drawn so that the margin (distance) between the two classes is maximised.

4. Support vectors are the data points that lie closest to the margin.

5. When the data is not linearly separable, SVM uses the kernel trick (for example, RBF or Polynomial kernels).

6. SVM also performs well on high-dimensional data.

7. Use cases include: face detection, bioinformatics, text classification, and fraud detection.

📊 Example:
You have 2 categories: Cats and Dogs.
SVM will use their features (such as height & weight) to draw a straight line (or a curve) that clearly separates the two, as in the sketch below.
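A minimal sketch of this cats-vs-dogs idea with a linear SVM (assuming scikit-learn is available); the height/weight numbers and labels are made-up illustrative values:

```python
# Minimal sketch: a linear SVM separating two classes by height and weight.
from sklearn.svm import SVC

# Features: [height_cm, weight_kg]; labels: 0 = cat, 1 = dog
X = [[24, 4.0], [26, 4.5], [23, 3.8], [25, 4.2],        # cats
     [55, 20.0], [60, 25.0], [50, 18.0], [58, 23.0]]    # dogs
y = [0, 0, 0, 0, 1, 1, 1, 1]

clf = SVC(kernel="linear")          # a straight separating line (hyperplane)
clf.fit(X, y)

print("Support vectors:\n", clf.support_vectors_)            # the points closest to the boundary
print("Prediction for [40, 12]:", clf.predict([[40, 12]]))   # 0 = cat, 1 = dog
```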

✅ Goal:
●​ Separate the classes in such a way that the margin (distance) from both classes is maximum.

●​ This margin is defined by the support vectors – the points that lie closest to the boundary.

📚 Types of SVM:
1. Linear SVM:

   ○ Used when the data is linearly separable (it can be separated by a straight line).

   ○ Fast & simple.

2. Non-linear SVM (with Kernel Trick):

   ○ Used when the data is curved or complex.

   ○ SVM uses "kernel functions" (such as RBF or Polynomial) to map the data into a higher dimension so that it can be separated linearly, as in the sketch below.
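A minimal sketch comparing a Linear SVM with a kernel (RBF) SVM on data that is not linearly separable (assuming scikit-learn is available; the make_moons toy dataset is an illustrative choice):

```python
# Minimal sketch: Linear SVM vs. RBF-kernel SVM on non-linearly-separable data.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(f"{kernel} SVM accuracy: {clf.score(X_test, y_test):.2f}")
```

On this kind of curved data the RBF kernel usually scores noticeably higher than the linear SVM, which is exactly the benefit of the kernel trick described above.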

📈 ROC Curve – (Receiver Operating Characteristic Curve)


1. The ROC Curve is a graph used to evaluate the performance of a classification model.

2. In this graph the X-axis is the False Positive Rate (FPR)
   and the Y-axis is the True Positive Rate (TPR), i.e. Recall.

3. The curve shows how well the model performs at different threshold values.

4. If the curve lies close to the top-left corner, the model is performing well.

5. The higher the AUC (Area Under Curve) value (closer to 1), the more accurate the model.

6. ROC is helpful for imbalanced datasets – for example, fraud detection and cancer diagnosis.

7. A perfect model's ROC curve reaches the top-left corner, and its AUC = 1 (see the sketch below).
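A minimal sketch of computing an ROC curve and its AUC (assuming scikit-learn and matplotlib are available); the breast-cancer dataset and the logistic regression model are illustrative choices, not from the document:

```python
# Minimal sketch: ROC curve and AUC for a binary classifier.
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]            # probability of the positive class

fpr, tpr, thresholds = roc_curve(y_test, scores)      # one (FPR, TPR) point per threshold
print("AUC:", roc_auc_score(y_test, scores))          # closer to 1 = better

plt.plot(fpr, tpr, label="model")
plt.plot([0, 1], [0, 1], linestyle="--", label="random guess")   # AUC = 0.5 baseline
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate (Recall)")
plt.legend()
plt.show()
```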
