AI ML Unit 3
Introduction to Classification
c) Clustering data
d) Reducing dimensionality
Which of the following is not a classification algorithm?
a) K-Means ✅
c) Decision Tree
d) Naïve Bayes
What is precision in classification?
a) The ratio of true positives to the sum of true positives and false positives ✅
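The precision formula in the answer above can be sanity-checked with a tiny helper (an illustrative sketch; the function name is mine, not from the quiz):

```python
def precision(tp, fp):
    # precision = TP / (TP + FP): of everything predicted positive,
    # how much actually was positive
    return tp / (tp + fp)

# e.g. 8 true positives and 2 false positives give precision 0.8
```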
b) Parametric algorithm
How is the optimal value of K selected in KNN?
a) Using cross-validation ✅
b) Random selection
d) Increasing K indefinitely
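Choosing K by cross-validation can be sketched with a brute-force KNN and leave-one-out evaluation (illustrative pure-Python code; the function names and data layout are my own assumptions):

```python
import math
from collections import Counter

def knn_predict(train, query, k):
    # brute force: scan every training point (O(n) per query),
    # then take a majority vote among the k nearest neighbours
    by_dist = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

def loo_accuracy(data, k):
    # leave-one-out cross-validation: predict each point from all the others
    hits = sum(knn_predict(data[:i] + data[i + 1:], x, k) == y
               for i, (x, y) in enumerate(data))
    return hits / len(data)

# pick the K with the best cross-validated accuracy, e.g.:
# best_k = max([1, 3, 5], key=lambda k: loo_accuracy(data, k))
```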
Which distance metric is most commonly used in KNN?
a) Euclidean distance ✅
b) Cosine similarity
c) Manhattan distance
d) Jaccard distance
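The two most common distance options above are easy to compare in code (a minimal sketch; function names are mine):

```python
import math

def euclidean(a, b):
    # straight-line (L2) distance: the default metric in KNN
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # city-block (L1) distance: sum of absolute coordinate differences
    return sum(abs(x - y) for x, y in zip(a, b))
```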
d) Using backpropagation
Which algorithm is commonly used to construct a decision tree?
a) ID3 ✅
b) K-Means
c) Gradient Boosting
d) PCA
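ID3 builds a tree by choosing, at each node, the attribute with the highest information gain. The two quantities involved can be sketched as (illustrative code, names mine):

```python
import math
from collections import Counter

def entropy(labels):
    # H = -sum(p * log2 p) over the class frequencies
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, splits):
    # ID3 greedily picks the attribute whose split maximises this gain:
    # parent entropy minus the size-weighted entropy of the children
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in splits)
```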
Naïve Bayes is based on which of the following?
a) Bayes' theorem ✅
b) Pythagorean theorem
c) Markov property
d) Euclidean distance
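Bayes' theorem itself is a one-liner; the numbers below are a made-up example, not from the quiz:

```python
def bayes(p_b_given_a, p_a, p_b):
    # Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
    return p_b_given_a * p_a / p_b

# e.g. P(disease | positive test) with P(pos|disease)=0.9,
# P(disease)=0.01, P(pos)=0.05 comes out to 0.18
```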
23. What kernel function is commonly used in SVM for non-linear classification?
a) RBF (Radial Basis Function) kernel ✅
b) Linear kernel
c) Manhattan kernel
d) Euclidean kernel
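For non-linear SVM classification the usual choice is the RBF (Gaussian) kernel; a minimal sketch (function name is mine):

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    # K(x, y) = exp(-gamma * ||x - y||^2): similarity decays with distance,
    # so identical points score 1.0 and far-apart points approach 0
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)
```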
Which hyperparameter controls the trade-off between margin width and misclassification in SVM?
a) C (Regularization parameter) ✅
b) K (Neighbors)
c) Entropy
d) Tree depth
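The role of C is visible in the soft-margin objective that SVM training minimises; a sketch of just the objective evaluation (names mine, labels assumed to be in {-1, +1}):

```python
def soft_margin_objective(w, b, C, X, y):
    # 0.5*||w||^2 + C * sum of hinge losses.
    # Larger C punishes misclassification harder (narrower margin);
    # smaller C tolerates some errors in exchange for a wider margin.
    margin_term = 0.5 * sum(wi * wi for wi in w)
    hinge = sum(max(0.0, 1.0 - yi * (sum(wi * xi for wi, xi in zip(w, x)) + b))
                for x, yi in zip(X, y))
    return margin_term + C * hinge
```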
c) Reducing dimensionality
d) Clustering data
Which equation represents simple linear regression?
a) y = mx + b ✅
b) y = a + bx²
c) y = x / 2
d) y = log(x)
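The slope m and intercept b of y = mx + b have a closed-form least-squares solution; a pure-Python sketch (function name is mine):

```python
def fit_line(xs, ys):
    # ordinary least squares for y = m*x + b:
    # m = cov(x, y) / var(x), b = mean(y) - m * mean(x)
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    return m, mean_y - m * mean_x
```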
31. Which evaluation metric is most appropriate for imbalanced classification problems?
a) F1-score ✅
b) Accuracy
d) R-squared
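Why the F1-score beats accuracy on imbalanced data is easiest to see from its formula (illustrative helper, name mine):

```python
def f1_score(tp, fp, fn):
    # harmonic mean of precision and recall; unlike accuracy,
    # it is not inflated by correctly predicting a large majority class
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```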
Which classifier assumes that the features are independent of each other?
a) Naïve Bayes ✅
b) Decision Tree
c) KNN
d) SVM
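The "naive" independence assumption means the joint likelihood is just a product of per-feature probabilities, usually computed in log space; a minimal sketch (names and data layout are mine):

```python
import math

def nb_log_posterior(prior, cond_probs, features):
    # naive independence: log P(c) + sum_i log P(x_i | c);
    # summing logs avoids underflow from multiplying many small numbers
    return math.log(prior) + sum(math.log(cond_probs[f]) for f in features)
```

The class with the highest log-posterior wins; the normalising constant P(x) can be dropped because it is the same for every class.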
What is the time complexity of making a single KNN prediction (brute force) with n training samples?
a) O(n) ✅
b) O(log n)
c) O(1)
d) O(n²)
Which similarity measure is most suitable for binary or set-valued data?
a) Jaccard similarity ✅
b) Euclidean distance
c) Manhattan distance
d) Minkowski distance
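Jaccard similarity compares sets by overlap rather than by coordinates, which is why it suits binary data; a one-line sketch (name mine):

```python
def jaccard(a, b):
    # |A ∩ B| / |A ∪ B| for sets (or binary attribute vectors)
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)
```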
Which Python library is most commonly used to implement classical classification algorithms?
a) Scikit-learn ✅
b) TensorFlow
c) PyTorch
d) Pandas
b) R-squared
c) Euclidean distance
d) Cosine similarity
43. How does a random forest improve over a single decision tree?
48. Which type of Naïve Bayes classifier is best suited for text classification?
d) Logistic Regression
Classification is an example of which type of machine learning?
a) Supervised learning ✅
b) Unsupervised learning
c) Reinforcement learning
d) Semi-supervised learning
c) It reduces overfitting
d) It improves generalization
c) By entropy
d) By a hyperplane
b) It increases overfitting
d) It reduces bias
Which technique can speed up nearest-neighbour search in KNN?
a) KD-trees ✅
b) Increasing dataset size
c) Ignoring distances
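A KD-tree speeds up KNN by pruning whole regions of space during the search; a compact pure-Python sketch of build and nearest-neighbour query (illustrative, names mine):

```python
import math

def build(points, depth=0):
    # split on coordinate axes in rotation; the median point becomes the node
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def nearest(node, target, best=None):
    # branch-and-bound descent: visit the near side first and only
    # search the far side if it could still contain a closer point
    if node is None:
        return best
    d = math.dist(node["point"], target)
    if best is None or d < best[0]:
        best = (d, node["point"])
    diff = target[node["axis"]] - node["point"][node["axis"]]
    near, far = ("left", "right") if diff < 0 else ("right", "left")
    best = nearest(node[near], target, best)
    if abs(diff) < best[0]:
        best = nearest(node[far], target, best)
    return best
```

On average this prunes most of the tree, giving roughly O(log n) queries instead of the O(n) brute-force scan.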
b) Always 5
c) Always 1
c) By stopping training
d) Logistic Regression
c) By ignoring it
70. What happens if Naïve Bayes encounters a zero probability for a category?
c) It stops predicting
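Without a correction, a single zero probability in question 70 would zero out the whole Naïve Bayes product. The standard fix is Laplace (add-one) smoothing; a minimal sketch (function name is mine):

```python
def laplace_smoothed(count, total, vocab_size, alpha=1):
    # add-alpha (Laplace) smoothing: unseen categories get a small
    # non-zero probability instead of zeroing out the whole product
    return (count + alpha) / (total + alpha * vocab_size)
```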