Machine Learning Project Basic - Linear Regression
Python notebook using data from Ecommerce Customer Device Usage
In [1]:
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here are several helpful packages to load in
import numpy as np   # linear algebra
import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
['Ecommerce Customers']
In [2]:
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
We'll work with the Ecommerce Customers csv file from the company. It has customer info such as Email, Address, and their color Avatar, plus the numerical columns Avg. Session Length, Time on App, Time on Website, Length of Membership, and Yearly Amount Spent.
Check the head of customers, and check out its info() and describe() methods.
In [3]:
customers = pd.read_csv('../input/Ecommerce Customers')
In [4]:
customers.head()
Out[4]:
                           Email                                             Address            Avatar  Avg. Session Length  Time on App  Time on Website  Length of Membership  Yearly Amount Spent
0      mstephenson@fernandez.com        835 Frank Tunnel\nWrightmouth, MI 82180-9605            Violet            34.497268    12.655651        39.577668              4.082621              587.951
1              hduke@hotmail.com      4547 Archer Common\nDiazchester, CA 06566-8576         DarkGreen            31.926272    11.109461        37.268959              2.664034              392.204
2               pallen@yahoo.com  24645 Valerie Unions Suite 582\nCobbborough, D...            Bisque            33.000915    11.330278        37.110597              4.104543              487.547
3        riverarebecca@gmail.com   1414 David Throughway\nPort Jason, OH 22070-1220       SaddleBrown            34.305557    13.717514        36.721283              3.120179              581.852
4  mstephens@davidson-herman.com  14023 Rodriguez Passage\nPort Jacobville, PR 3...  MediumAquaMarine            33.330673    12.795189        37.536653              4.446308              599.406
In [5]:
customers.describe()
Out[5]:
Avg. Session Length Time on App Time on Website Length of Membership Yearly Amount Spent
count 500.000000 500.000000 500.000000 500.000000 500.000000
mean 33.053194 12.052488 37.060445 3.533462 499.314038
std 0.992563 0.994216 1.010489 0.999278 79.314782
min 29.532429 8.508152 33.913847 0.269901 256.670582
25% 32.341822 11.388153 36.349257 2.930450 445.038277
50% 33.082008 11.983231 37.069367 3.533975 498.887875
75% 33.711985 12.753850 37.716432 4.126502 549.313828
max 36.139662 15.126994 40.005182 6.922689 765.518462
In [6]:
customers.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 500 entries, 0 to 499
Data columns (total 8 columns):
Email 500 non-null object
Address 500 non-null object
Avatar 500 non-null object
Avg. Session Length 500 non-null float64
Time on App 500 non-null float64
Time on Website 500 non-null float64
Length of Membership 500 non-null float64
Yearly Amount Spent 500 non-null float64
dtypes: float64(5), object(3)
memory usage: 31.3+ KB
In [7]:
sns.set_palette("GnBu_d")
sns.set_style('whitegrid')
For the rest of the exercise we'll only be using the numerical data of the csv file.
Use seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns. Does the correlation make
sense?
In [8]:
sns.jointplot(x='Time on Website',y='Yearly Amount Spent',data=customers)
Out[8]:
<seaborn.axisgrid.JointGrid at 0x7f2b0659b9e8>
In [9]:
sns.jointplot(x='Time on App',y='Yearly Amount Spent',data=customers)
Out[9]:
<seaborn.axisgrid.JointGrid at 0x7f2b02492828>
Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.
In [10]:
sns.jointplot(x='Time on App',y='Length of Membership',kind="hex",data=customers)
Out[10]:
<seaborn.axisgrid.JointGrid at 0x7f2b3b8bf240>
Let's explore these types of relationships across the entire data set. Use pairplot (https://stanford.edu/~mwaskom/software/seaborn/tutorial/axis_grids.html#plotting-pairwise-relationships-with-pairgrid-and-pairplot) to recreate the plot below.
In [11]:
sns.pairplot(customers)
Out[11]:
<seaborn.axisgrid.PairGrid at 0x7f2b02210390>
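As an optional complement to the pairplot (not part of the original kernel), a correlation heatmap makes the pairwise relationships easier to read at a glance. A minimal sketch, assuming the numeric columns listed in describe() above:

numeric_cols = ['Avg. Session Length', 'Time on App', 'Time on Website',
                'Length of Membership', 'Yearly Amount Spent']
# annotated heatmap of pairwise correlations between the numeric columns
sns.heatmap(customers[numeric_cols].corr(), annot=True, cmap='GnBu')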
Based on this plot, what looks to be the most correlated feature with Yearly Amount Spent?
In [12]:
# Length of Membership
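To back up the visual impression with a number (a quick sketch, reusing the customers DataFrame loaded above), the numeric columns can be ranked by their correlation with the target:

# drop the non-numeric columns, then sort correlations with the target
customers.drop(['Email', 'Address', 'Avatar'], axis=1).corr()['Yearly Amount Spent'].sort_values(ascending=False)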
Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership.
In [13]:
sns.lmplot(x='Length of Membership',y='Yearly Amount Spent',data=customers)
Out[13]:
<seaborn.axisgrid.FacetGrid at 0x7f2b017359b0>
Now that we've explored the data a bit, let's go ahead and split the data into training and testing sets. Set a variable X equal to the numerical
features of the customers and a variable y equal to the "Yearly Amount Spent" column.
In [14]:
X = customers[['Avg. Session Length','Time on App','Time on Website','Length of Membership']]
In [15]:
y = customers['Yearly Amount Spent']
Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and
random_state=101
In [16]:
from sklearn.model_selection import train_test_split
In [17]:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
In [18]:
from sklearn.linear_model import LinearRegression
In [19]:
lm = LinearRegression()
In [20]:
lm.fit(X_train,y_train)
Out[20]:
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)
Print out the coefficients of the model
In [21]:
lm.coef_
Out[21]:
array([25.98154972, 38.59015875, 0.19040528, 61.27909654])
Now that we have fit our model, let's evaluate its performance by predicting off the test values!
In [22]:
predictions = lm.predict(X_test)
Create a scatterplot of the real test values versus the predicted values.
In [23]:
plt.scatter(y_test,predictions)
plt.xlabel('Y Test')
plt.ylabel('Predicted Y')
Out[23]:
Text(0,0.5,'Predicted Y')
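One optional tweak (a sketch, not in the original notebook) is to overlay a 45-degree reference line, so that perfect predictions would fall exactly on it:

plt.scatter(y_test, predictions)
# points on the dashed diagonal correspond to perfect predictions
lims = [min(y_test.min(), predictions.min()), max(y_test.max(), predictions.max())]
plt.plot(lims, lims, 'r--')
plt.xlabel('Y Test')
plt.ylabel('Predicted Y')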
In [24]:
from sklearn import metrics
Let's evaluate our model performance by calculating the residual sum of squares and the explained variance score (R^2).
Calculate the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the
formulas
In [25]:
print('MAE :', metrics.mean_absolute_error(y_test, predictions))
print('MSE :', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
MAE : 7.2281486534308295
MSE : 79.8130516509743
RMSE: 8.933815066978626
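The text above also mentions the residual sum of squares and the explained variance score (R^2), which the notebook never actually prints. A minimal sketch using the same metrics module and NumPy:

residuals = y_test - predictions
print('RSS :', np.sum(residuals ** 2))                                # residual sum of squares
print('R^2 :', metrics.r2_score(y_test, predictions))                 # coefficient of determination
print('EVS :', metrics.explained_variance_score(y_test, predictions)) # explained variance score
print('RMSE:', np.sqrt(np.mean(residuals ** 2)))                      # same as np.sqrt(MSE) above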
Residuals
You should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data.
Plot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn distplot, or just plt.hist().
In [26]:
sns.distplot(y_test - predictions,bins=50)
Out[26]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f2af7704358>
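Beyond eyeballing the histogram, a quick probability plot (a sketch using SciPy, which the original kernel does not import) can also flag departures from normality; the points should hug the straight line if the residuals are roughly Gaussian:

from scipy import stats
# quantile-quantile plot of the residuals against a normal distribution
stats.probplot(y_test - predictions, dist='norm', plot=plt)
plt.show()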
Conclusion
We still want to figure out the answer to the original question: do we focus our efforts on mobile app or website development? Or maybe that doesn't even really matter, and Membership Time is what is really important. Let's see if we can interpret the coefficients at all to get an idea.
In [27]:
coefficients = pd.DataFrame(lm.coef_, X.columns)
coefficients.columns = ['Coefficient']
coefficients
Out[27]:
Coefficient
Avg. Session Length 25.981550
Time on App 38.590159
Time on Website 0.190405
Length of Membership 61.279097
Holding all other features fixed, a 1 unit increase in Avg. Session Length is associated with an increase of 25.98 total dollars spent.
Holding all other features fixed, a 1 unit increase in Time on App is associated with an increase of 38.59 total dollars spent.
Holding all other features fixed, a 1 unit increase in Time on Website is associated with an increase of 0.19 total dollars spent.
Holding all other features fixed, a 1 unit increase in Length of Membership is associated with an increase of 61.27 total dollars spent.
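For completeness, the fitted line also has an intercept that the coefficient table omits. A small sketch assembling the whole equation from the fitted lm (variable names as above):

# join each coefficient with its feature name, then prepend the intercept
terms = ' + '.join('{:.2f} * [{}]'.format(coef, name) for name, coef in zip(X.columns, lm.coef_))
print('Yearly Amount Spent ~ {:.2f} + {}'.format(lm.intercept_, terms))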
Do you think the company should focus more on their mobile app or on their website?
This is tricky; there are two ways to think about it: develop the website to catch up to the performance of the mobile app, or develop the app more since that is what is working better. The answer really depends on the other factors going on at the company; you would probably want to explore the relationship between Length of Membership and the App or the Website before coming to a conclusion!