Assignment 3: Classification Analysis
Group Members:
Abeeha Abid (F2021065300)
Esha Tur Razia (F2021065348)
Eman Azam (F2021065253)
2024-2025
Task 1
K-Nearest Neighbors (KNN)
Working Principles: Predicts the class of a sample by majority vote among its k nearest neighbors in the feature space.
Assumptions: Nearby points tend to have similar labels; requires a meaningful distance metric and properly scaled features.
Strengths: Simple to implement and understand; no explicit training phase.
Weaknesses: Sensitive to irrelevant features and feature scaling; prediction is slow on large datasets because distances to all training points must be computed.
Training Efficiency: Very high; no training is needed beyond storing the data (lazy learning).
Computational Efficiency: Low at prediction time, especially with large datasets.
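A minimal sketch of KNN classification with scikit-learn, assuming the built-in Iris dataset as a stand-in and k = 5 chosen only for illustration; features are scaled because KNN is distance-based.

# Minimal KNN sketch (assumed stand-in dataset: Iris; k = 5 is an illustrative choice)
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale features: KNN uses distances, so unscaled features can dominate the vote
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

knn = KNeighborsClassifier(n_neighbors=5)  # majority vote among the 5 nearest neighbors
knn.fit(X_train, y_train)                  # "training" only stores the data
print("KNN accuracy:", accuracy_score(y_test, knn.predict(X_test)))

Note that fit() only stores the training set; all distance computation happens at prediction time, which is why prediction cost grows with the size of the training data.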
Artificial Neural Network (ANN)
Working Principles: Mimics the human brain with interconnected layers of neurons; learns features and patterns by adjusting weights through backpropagation.
Assumptions: Requires large amounts of labeled data and sufficient computational resources.
Strengths: Very powerful for complex, non-linear problems.
Weaknesses: Requires significant computational power and training time; prone to overfitting without regularization or enough data.
Training Efficiency: Low for large networks, since many epochs of gradient updates are needed.
Computational Efficiency: Relatively low; depends on the size and depth of the network.
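A minimal sketch of a feed-forward neural network using scikit-learn's MLPClassifier, again assuming the Iris data as a stand-in; the single hidden layer of 32 neurons and the iteration limit are illustrative assumptions, and the network is trained by backpropagation.

# Minimal neural-network sketch (assumed stand-in dataset: Iris; one hidden layer of 32 units)
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scaling the inputs helps gradient-based training converge
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# One hidden layer of 32 neurons, weights learned via backpropagation
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=42)
mlp.fit(X_train, y_train)
print("MLP accuracy:", accuracy_score(y_test, mlp.predict(X_test)))

Widening or deepening the hidden layers increases expressive power but also training time and the risk of overfitting, matching the trade-offs listed above.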
Task 2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

data = pd.read_csv('data.csv')
print(data.columns)

# Assumes the target column is named 'Income'; all remaining columns are used as features
# (non-numeric feature columns, if any, would need encoding first)
X = data.drop(columns=['Income'])
y = data['Income']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

# Evaluate the fit with the imported regression metrics
print('MAE:', mean_absolute_error(y_test, y_pred))
print('MSE:', mean_squared_error(y_test, y_pred))
print('R2 :', r2_score(y_test, y_pred))

# Plot predicted vs. actual income
plt.scatter(y_test, y_pred)
plt.xlabel('Actual Income')
plt.ylabel('Predicted Income')
plt.title('Actual vs Predicted Income')
plt.plot([min(y_test), max(y_test)], [min(y_test), max(y_test)], color='red')  # Diagonal reference line
plt.show()
Task 3
https://chatgpt.com/