Machine Learning Engineer Nanodegree
Unsupervised Learning
Project 3: Creating Customer Segments
Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some
template code has already been provided for you, and it will be your job to implement the additional
functionality necessary to successfully complete this project. Sections that begin
with 'Implementation' in the header indicate that the following block of code will require additional
functionality which you must provide. Instructions will be provided for each section and the specifics of
the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the
instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the
project and your implementation. Each section where you will answer a question is preceded by
a 'Question X' header. Carefully read each question and provide thorough answers in the following text
boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to
each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.
Getting Started
In this project, you will analyze a dataset containing various customers' annual spending amounts (reported in monetary units) across diverse product categories, in order to uncover internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of
this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on
the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary
Python libraries required for this project. You will know the dataset loaded successfully if the size of the
dataset is reported.
In [1]:
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
import visuals as vs
%matplotlib inline
try:
    data = pd.read_csv("customers.csv")
    data.drop(['Region', 'Channel'], axis = 1, inplace = True)
    print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
    print "Dataset could not be loaded. Is the dataset missing?"
Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how
each feature is related to the others. You will observe a statistical description of the dataset, consider
the relevance of each feature, and select a few sample data points from the dataset which you will track
through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is
composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper',
and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.
In [3]:
display(data.describe())
       Fresh         Milk         Grocery       Frozen       Detergents_Paper  Delicatessen
count  440.000000    440.000000   440.000000    440.000000   440.000000        440.000000
75%    16933.750000  7190.250000  10655.750000  3554.250000  3922.000000       1820.250000
Implementation: Selecting Samples
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
In [4]:
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [154, 181, 338]

# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)

# Visualize where each sample falls, as a percentile of the full dataset
import seaborn as sns
percentiles_data = 100 * data.rank(pct=True)
percentiles_samples = percentiles_data.iloc[indices]
sns.heatmap(percentiles_samples, annot=True)
   Fresh   Milk   Grocery  Frozen  Detergents_Paper  Delicatessen
0  622     55     137      75      7                 8
1  112151  29627  18148    16745   4948              8550
2  3       333    7021     15601   15                550

Out[4]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f675367ca90>
Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset
above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen
represent?
Hint: Examples of establishments include places like markets, cafes, delis, wholesale retailers, among
many others. Avoid using names for establishments, such as saying "McDonalds" when describing a
sample customer as a restaurant. You can use the mean values for reference to compare your samples
with. The mean values are as follows:
Fresh: 12000.2977
Milk: 5796.2
Grocery: 7951.3
Frozen: 3071.9
Detergents_Paper: 2881.4
Delicatessen: 1524.8
Knowing this, how do your samples compare? Does that help in driving your insight into what kind of
establishments they might be?
Answer:
The establishment at index 154 has annual spending much lower than average, falling in the lower 25th percentile for all features. This suggests a rather small business. We also see that the majority of its annual spending goes to "Fresh" products, followed by "Grocery". This could indicate that the establishment is a small restaurant.
The establishment at index 181 has annual spending much higher than average for all features. It sits above the 75th percentile for every product category, which makes me think it might be a very big hotel or a chain of hotels.
The establishment at index 338 has annual spending generally lower than the mean, except for products in the "Frozen" category, where its spending of 15601 is quite high, above the 75th percentile of the dataset. For the "Frozen" feature, it is about two standard deviations above the mean, while all the other features are rather low. This observation can lead us to believe that this establishment might be a market that sells frozen products.
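As a quick, hedged check of the standard-deviation claim above (an illustrative aside, not part of the original notebook):

# Roughly how many standard deviations above the mean is customer 338's 'Frozen' spending?
z = (data.loc[338, 'Frozen'] - data['Frozen'].mean()) / data['Frozen'].std()
print "Frozen z-score for customer 338: {:.2f}".format(z)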
One interesting thought to consider is if one (or more) of the six product categories is actually relevant
for understanding customer purchasing. That is to say, is it possible to determine whether customers
purchasing some amount of one category of products will necessarily purchase some proportional
amount of another category of products? We can make this determination quite easily by training a
supervised regression learner on a subset of the data with one feature removed, and then score how
well that model can predict the removed feature.
Implementation: Feature Relevance
In the code block below, you will need to implement the following:
Assign new_data a copy of the data by removing the feature of your choice using the DataFrame.drop function.
Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets, with the removed feature as your target label. Set a test_size of 0.25 and set a random_state.
Import a decision tree regressor, set a random_state, and fit the learner to the training data.
Report the prediction score of the testing set using the regressor's score function.
In [12]:
from sklearn.tree import DecisionTreeRegressor
from sklearn.cross_validation import train_test_split

def score_feature(data, feature):
    # TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
    new_data = data.drop(feature, axis = 1)
    all_scores = []
    # Average over 100 random splits, since a single split varies greatly
    for _ in range(100):
        # TODO: Split the data into training and testing sets (0.25) using the given feature as the target
        X_train, X_test, y_train, y_test = train_test_split(new_data, data[feature], test_size = 0.25)
        # TODO: Create a decision tree regressor and fit it to the training set
        regressor = DecisionTreeRegressor()
        regressor.fit(X_train, y_train)
        # TODO: Report the score of the prediction using the testing set
        all_scores.append(regressor.score(X_test, y_test))
    score = np.mean(all_scores)
    print "For feature \"{}\", the coefficient of determination is {}".format(feature, score)

for key in data.keys():
    score_feature(data, key)
Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data. If you get a low score for a particular feature, that leads us to believe that the feature is hard to predict using the other features, thereby making it an important feature to consider when judging relevance.
Answer: I didn't set the random state, because the coefficient of determination varied greatly when I changed it; instead I calculated the mean of the scores over 100 iterations to get a more general idea.
I first attempted to predict "Fresh", and the coefficient of determination was -0.708230948053. Since the score is negative, no correlation with the other features was found and the model fails to fit the data. This shows that "Fresh" provides unique information that cannot be predicted from the other features, and therefore the feature "Fresh" is necessary for identifying customers' spending habits.
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product
features present in the data. If you found that the feature you attempted to predict above is relevant for
identifying a specific customer, then the scatter matrix below may not show any correlation between
that feature and the others. Conversely, if you believe that feature is not relevant for identifying a
specific customer, the scatter matrix might show a correlation between that feature and another feature
in the data. Run the code block below to produce a scatter matrix.
In [13]:
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
In [14]:
# Visualize the feature correlations with a heatmap
corr = data.corr()
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
    sns.heatmap(corr, mask=mask, annot=True)
Question 3
Using the scatter matrix as a reference, discuss the distribution of the dataset, specifically talking about the normality, the outliers, and the large number of data points near 0, among others. If you need to separate out some of the plots individually to further accentuate your point, you may do so as well.
Are there any pairs of features which exhibit some degree of correlation?
Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict?
Hint: Is the data normally distributed? Where do most of the data points lie? You can use corr() to get the feature correlations and then visualize them using a heatmap (the data fed into the heatmap would be the correlation values, e.g. data.corr()) to gain further insight.
Answer:
The data appears to be highly positively skewed for all features, showing the presence of outliers with very high annual spending compared to the average. Indeed, most of the data points lie near zero. This bias could impact the algorithm's performance, so it is important to apply some kind of normalization to make the features more normally distributed.
We can see that "Grocery" and "Detergents_Paper" exhibit the highest correlation, at 0.925. We also see some correlation between "Grocery" and "Milk" (0.728) and between "Milk" and "Detergents_Paper" (0.662), but those correlations are relatively mild. There are no obvious correlations involving "Fresh", "Frozen", or "Delicatessen".
This confirms what we observed previously: the coefficient of determination for predicting "Fresh" from the other features was -0.708230948053. That score showed no obvious correlation with the other features, which corroborates the scatter matrix, where no clear pattern can be observed between "Fresh" and any other feature.
Data Preprocessing
In this section, you will preprocess the data to create a better representation of customers by
performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is
often times a critical step in assuring that results you obtain from your analysis are significant and
meaningful.
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling, particularly for financial data. One way to achieve this scaling is by using a Box-Cox transformation, which calculates the best power transformation of the data that reduces skewness. A simpler approach which works in most cases is applying the natural logarithm.
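As a hedged aside (not part of the project template), the Box-Cox alternative could be sketched with scipy.stats.boxcox, which fits the power parameter per feature; note that Box-Cox requires strictly positive values, which holds for this dataset:

from scipy import stats

# Illustrative only: fit a Box-Cox transformation independently for each feature
boxcox_data = data.copy()
for feature in boxcox_data.columns:
    # boxcox returns the transformed values and the fitted lambda
    boxcox_data[feature], lmbda = stats.boxcox(boxcox_data[feature])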
Implementation: Feature Scaling
In the code block below, you will need to implement the following:
Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function
for this.
Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again,
use np.log.
In [15]:
# TODO: Scale the data using the natural logarithm
log_data = np.log(data.copy())

# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear
much more normal. For any pairs of features you may have identified earlier as being correlated,
observe here whether that correlation is still present (and whether it is now stronger or weaker than
before).
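As an illustrative aside (not in the template), the correlations can be recomputed on the log-scaled data to check this directly:

# Pairwise correlations after the natural-log transform
display(log_data.corr())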
Run the code below to see how the sample data has changed after having the natural logarithm applied
to it.
In [16]:
display(log_samples)

   Fresh      Milk       Grocery   Frozen    Detergents_Paper  Delicatessen
0  6.432940   4.007333   4.919981  4.317488  1.945910          2.079442
1  11.627601  10.296441  9.806316  9.725855  8.506739          9.053687
2  1.098612   5.808142   8.856661  9.655090  2.708050          6.309918
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The
presence of outliers can often skew results which take into consideration these data points. There are
many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for
identfying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point
with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
Implementation: Outlier Detection
In the code block below, you will need to implement the following:
Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
Assign the calculation of an outlier step for the given feature to step.
Optionally remove data points from the dataset by adding indices to the outliers list.
NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these
points!
Once you have performed this implementation, the dataset will be stored in the variable good_data.
In [17]:
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
    # TODO: Calculate Q1 (25th percentile of the data) for the given feature
    Q1 = np.percentile(log_data[feature], 25)
    # TODO: Calculate Q3 (75th percentile of the data) for the given feature
    Q3 = np.percentile(log_data[feature], 75)
    # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
    step = 1.5 * (Q3 - Q1)
    # Display the outliers
    print "Data points considered outliers for the feature '{}':".format(feature)
    display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])

# OPTIONAL: Select the indices for data points you wish to remove
outliers = [65, 66, 75, 128, 154]
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
[Output: for each of the six features, a table of the data points considered outliers, with their log-scaled values. Indices flagged in these tables include 86, 109, 128, 137, 154, 171, 183, 184, 187, 193, 203, 218, 233, 285, 289, 304, 305, 338, 343, 353, 355, 357 and 412, among others.]
Question 4
Are there any data points considered outliers for more than one feature based on the definition
above?
Hint: If you have datapoints that are outliers in multiple categories think about why that may be and if
they warrant removal. Also note how k-means is affected by outliers and whether or not this plays a
factor in your analysis of whether or not to remove them.
Answer:
The following data points were found to be outliers in more than one feature based on the above definition: [65, 66, 75, 128, 154]. I think those points should be removed from the dataset, as they might introduce unwanted bias into the model. Because these points are outliers for multiple features, they show a specific spending behavior that is not representative of the majority. Therefore I added them to the outliers list above. I kept the data points that were flagged as outliers for only one feature, because removing them all would mean taking out a considerable number of data points (42 in total, representing 9.5% of the data), which would generally not be recommended without a strong justification. One other possibility could have been to increase the step size in Tukey's method so that only the extreme outliers are flagged.
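As a hedged sketch of that last alternative (not in the original implementation), only the multiplier in the loop above would change; Tukey's convention flags "extreme" outliers at 3.0 times the IQR:

# Flag only extreme outliers: widen the step from 1.5 x IQR to 3.0 x IQR
step = 3.0 * (Q3 - Q1)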
Feature Transformation
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying
structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which
best maximize variance, we will find which compound combinations of features best describe
customers.
Implementation: PCA
Now that the data has been scaled to a more normal distribution and has had any necessary outliers
removed, we can now apply PCA to the good_data to discover which dimensions about the data best
maximize the variance of features involved. In addition to finding these dimensions, PCA will also report
the explained variance ratio of each dimension — how much variance within the data is explained by
that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature"
of the space, however it is a composition of the original features present in the data.
In the code block below, you will need to implement the following:
Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
In [19]:
from sklearn.decomposition import PCA

# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=6).fit(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate the PCA results plot showing explained variance and feature weights
pca_results = vs.pca_results(good_data, pca)
Cumulative explained variance by dimension:
Dimension 1    0.4430
Dimension 2    0.7068
Dimension 3    0.8299
Dimension 4    0.9311
Dimension 5    0.9796
Dimension 6    1.0000
Question 5
How much variance in the data is explained in total by the first and second principal
component?
How much variance in the data is explained by the first four principal components?
Using the visualization provided above, talk about each dimension and the cumulative variance explained by each, stressing upon which features are well represented by each dimension (both in terms of positive and negative variance explained). Discuss what the first four dimensions best represent in terms of customer spending.
Answer:
Variance explained in total by the first and second principal components: 0.7068
Variance explained in total by the first four principal components: 0.9311
[The per-dimension discussion table (Dimension, Explained Variance, Cumulative Explained Variance, Discussion) was not recovered.]
Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA
transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions
of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
In [20]:
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
[Output: the three sample points expressed in Dimension 1 through Dimension 6.]
When using principal component analysis, one of the main goals is to reduce the dimensionality of the
data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost:
Fewer dimensions used implies less of the total variance in the data is being explained. Because of this,
the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
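As an illustrative aside (assuming the pca object fitted in In [19]), the cumulative figures reported above can be reproduced directly from the fitted PCA:

# Cumulative explained variance ratio per dimension
print np.cumsum(pca.explained_variance_ratio_)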
Implementation: Dimensionality Reduction
In the code block below, you will need to implement the following:
Assign the results of fitting PCA in two dimensions with good_data to pca.
Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.
Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
In [21]:
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2).fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remain unchanged when compared to a PCA transformation in six dimensions.
In [22]:
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
   Dimension 1  Dimension 2
0  6.6170       6.5320
1  -2.1899      -4.8605
2  3.0206       4.8169
Visualizing a Biplot
A biplot is a scatterplot where each data point is represented by its scores along the principal
components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In
addition, the biplot shows the projection of the original features along the components. A biplot can
help us interpret the reduced dimensions of the data, and discover relationships between the principal
components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.
In [23]:
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
Out[23]:
<matplotlib.axes._subplots.AxesSubplot at 0x7f6748726f10>
Observation
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component?
What about those that are associated with the second component? Do these observations agree with
the pca_results plot you obtained earlier?
From the biplot, which of the original features are most strongly correlated with the first
component?
"Detergents_Paper", "Grocery" and "Milk" are the features most strongly correlated with the first
component because most of their magnitude can be described along the first component axis.
What about those that are associated with the second component?
"Delicatessen", "Frozen" and "Fresh" are the features most strongly correlated with the second
component because most of their magnitude can be described along the second component axis.
Do these observations agree with the pca_results plot you obtained earlier?
These observations support the results obtained in the pca_results plot for Dimension 1 and Dimension 2: the projections of the original feature vectors onto each axis of the biplot correspond to the feature weights reported for each dimension in the pca_results plot.
Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture
Model clustering algorithm to identify the various customer segments hidden in the data. You will then
recover specific data points from the clusters to understand their significance by transforming them
back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm?
What are the advantages to using a Gaussian Mixture Model clustering algorithm?
Given your observations about the wholesale customer data so far, which of the two algorithms
will you use and why?
Hint: Think about the differences between hard clustering and soft clustering and which would be
appropriate for our dataset.
Answer:
K-Means advantages:
Fast and efficient: the time complexity of K-Means is linear, while that of hierarchical clustering is quadratic.
Gaussian Mixture Model advantages:
Soft clustering: each data point is assigned a likelihood of belonging to each cluster, rather than a single hard label.
The main criteria I would take into account to choose between these two algorithms are speed versus the second-order information desired, and the underlying structure of the data.
Since our dataset is quite small and scalability would not be an issue, I would use the Gaussian Mixture Model, because ranking the data points by their likelihood (soft classification) gives more flexibility and insight about the result than K-Means (hard classification).
Depending on the problem, the number of clusters that you expect to be in the data may already be
known. When the number of clusters is not known a priori, there is no guarantee that a given number of
clusters best segments the data, since it is unclear what structure exists in the data — if any. However,
we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient.
The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1
(dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring
method of a given clustering.
Implementation: Creating Clusters
In the code block below, you will need to implement the following:
Fit a clustering algorithm to the reduced_data and assign it to clusterer.
Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.
Find the cluster centers using the algorithm's respective attribute and assign them to centers.
Predict the cluster for each sample data point in pca_samples and assign them to sample_preds.
Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds. Assign the silhouette score to score.
In [32]:
'''
This is why you are getting approximately the same silhouette score in each iteration.
'''
clusterer = GaussianMixture(n_components=n).fit(reduced_data)
# clusterer = KMeans(n_clusters=2).fit(reduced_data)
preds = clusterer.predict(reduced_data)
# TODO: Find the cluster centers
centers = clusterer.means_
# centers = clusterer.cluster_centers_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
n=10
for i in range(1,10):
i += 1
n -= 1
Question 7
Report the silhouette score for several cluster numbers you tried.
Answer:
Cluster Visualization
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring
metric above, you can now visualize the results by executing the code block below. Note that, for
experimentation purposes, you are welcome to adjust the number of clusters for your clustering
algorithm to see various visualizations. The final visualization provided should, however, correspond
with the optimal number of clusters.
In [34]:
# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
Implementation: Data Recovery
In the code block below, you will need to implement the following:
Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.
Apply the natural exponent of log_centers using np.exp and assign the true centers to true_centers.
In [37]:
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers, one row per segment
segments = ['Segment {}'.format(i) for i in range(0, len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
display(np.exp(good_data).describe())
           Fresh   Milk    Grocery  Frozen  Detergents_Paper  Delicatessen
Segment 0  3567.0  7860.0  12249.0  873.0   4713.0            966.0
Segment 1  8939.0  2108.0  2758.0   2073.0  352.0             730.0

       Fresh         Milk         Grocery       Frozen       Detergents_Paper  Delicatessen
75%    16934.500000  7168.000000  10665.500000  3559.500000  3935.000000       1825.500000
Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project (specifically looking at the mean values for the various feature points). What set of establishments could each of the customer segments represent?
Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'. Think about what each segment represents in terms of their values for the feature points chosen. Reference these values with the mean values to get some perspective into what kind of establishment they represent.
Answer: The values for "Milk", "Grocery" and "Detergents_Paper" in Segment 0 (Cluster 0) lie above the 75th percentile of the dataset and could therefore help identify establishments such as big and small shops/retailers reselling primarily groceries, dairy products, and household products.
The values for "Fresh" and "Frozen" in Segment 1 (Cluster 1) are above the median of the dataset and could help identify establishments such as restaurants/canteens, which primarily need fresh products but also need some frozen food.
Question 9
For each sample point, which customer segment from Question 8 best represents it?
Are the predictions for each sample point consistent with this?
Run the code block below to find which cluster each sample point is predicted to be.
In [43]:
# Display the predictions
for i, pred in enumerate(sample_preds):
    print "Sample point", i, "predicted to be in Cluster", pred
Answer:
Sample point 0 corresponds to the establishment at index 154 and was estimated to be a small restaurant. This is indeed consistent with its classification in Cluster 1, which I described above.
Sample point 1 corresponds to the establishment at index 181. Unfortunately, this data point was removed from the dataset because it was considered an outlier in a previous section. I estimated this customer to be a big hotel or a chain of hotels, as it was spending much more than the average in all features. It has been classified in Cluster 0, which I described as representing mainly big and small shops/retailers. Being an outlier makes this categorization complicated to interpret without taking a closer look at the data.
Sample point 2 corresponds to the establishment at index 338 and was estimated to be a market that sells frozen products. It was classified in Cluster 1, which is consistent with the observation that Cluster 1 includes establishments with higher spending on frozen products than the ones in Cluster 0.
Conclusion
In this final section, you will investigate ways that you can make use of the clustered data. First, you will
consider how the different groups of customers, the customer segments, may be affected differently by
a specific delivery scheme. Next, you will consider how giving a label to each customer
(which segment that customer belongs to) can provide for additional features about the customer data.
Finally, you will compare the customer segments to a hidden variable present in the data, to see
whether the clustering identified certain relationships.
Question 10
Companies will often run A/B tests when making small changes to their products or services to
determine whether making that change will affect its customers positively or negatively. The wholesale
distributor is considering changing its delivery service from currently 5 days a week to 3 days a week.
However, the distributor will only make this change in delivery service for customers that react
positively.
How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?
Hint: Can we assume the change affects all customers equally? How can we determine which group of
customers it affects the most?
Answer:
We can run A/B tests separately for each segment, where a certain percentage of customers is offered the 3-days-a-week option (group A) instead of the 5-days-a-week option (group B). A representative proportion of customers from each segment should be tested with the new option. This would allow the distributor to determine whether how well the new option is received depends on the segment. At the end of a defined period, it would be possible to compare the revenue made by the distributor and determine whether customers reacted positively or negatively depending on their segment and group (A or B).
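A hedged sketch of how such a stratified group assignment might look (illustrative only; it assumes the preds cluster labels from the clustering step):

# Illustrative: randomly assign half of each segment to the 3-day trial (group A)
np.random.seed(42)
for segment in np.unique(preds):
    members = np.where(preds == segment)[0]
    group_a = np.random.choice(members, size = len(members) // 2, replace = False)
    print "Segment {}: {} customers in group A, {} in group B".format(
        segment, len(group_a), len(members) - len(group_a))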
Question 11
Additional structure is derived from originally unlabeled data when using clustering techniques. Since
each customer has a customer segment it best identifies with (depending on the clustering algorithm
applied), we can consider 'customer segment' as an engineered feature for the data. Assume the
wholesale distributor recently acquired ten new customers and each provided estimates for anticipated
annual spending of each product category. Knowing these estimates, the wholesale distributor wants to
classify each new customer to a customer segment to determine the most appropriate delivery service.
How can the wholesale distributor label the new customers using only their estimated product
spending and the customer segment data?
Hint: A supervised learner could be used to train on the original customers. What would be the target
variable?
Answer:
Now that we have customer segments that are fairly well described, we can use the segment assignments as an engineered feature for the data. Because there are only two segments, this becomes a binary supervised learning problem. We can consider the feature "segment" as the target variable (labels), split the data into training and testing sets, and train a supervised learner on the training set. The learner should be tested for its performance, and if the performance is satisfying, it should be able to predict with a known accuracy which segment new customers belong to, using only their features (the provided estimates of anticipated annual spending for each product category).
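A minimal sketch of this idea (an assumption, not part of the original notebook), using a decision tree classifier on the cluster assignments from the earlier cells:

from sklearn.tree import DecisionTreeClassifier
from sklearn.cross_validation import train_test_split

# Illustrative: train a supervised learner with the segment labels as the target
X_train, X_test, y_train, y_test = train_test_split(
    reduced_data, preds, test_size = 0.25, random_state = 42)
clf = DecisionTreeClassifier(random_state = 42).fit(X_train, y_train)
print "Accuracy on held-out customers: {:.3f}".format(clf.score(X_test, y_test))
# A new customer's estimated spending would be log-scaled and PCA-transformed
# before calling clf.predict to obtain its segment label.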
Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
In [44]:
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
Question 12
How well does the clustering algorithm and number of clusters you've chosen compare to this
underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers?
Would you consider these classifications as consistent with your previous definition of the
customer segments?
Answer:
The distribution of segments found with the Gaussian Mixture Model is quite similar to the above distribution of Hotel/Restaurant/Cafe customers versus Retailer customers. We can see that the centroids are very close, although the frontier isn't as sharp. This confirms that the GMM was a good choice of algorithm, as the clusters do have a fair amount of overlap in reality. Soft clustering gives us confidence levels in our predictions, which would understandably be low at the boundary between two clusters.
When looking at the data, we see that there are parts of the space that would be classified as purely Retailer for x-values < -3 and as purely Hotel/Restaurant/Cafe for x-values > 2. More complicated decision boundaries could be drawn to better capture those areas.
My definitions of the customer segments were:
Segment 0 (Cluster 0) could help identify establishments such as big and small shops/retailers reselling primarily groceries, dairy products, and household products.
Segment 1 (Cluster 1) could help identify establishments such as restaurants/canteens, which primarily need fresh products but also need some frozen food, dairy products, and groceries.
Note: Once you have completed all of the code implementations and successfully answered each
question above, you may finalize your work by exporting the iPython Notebook as an HTML document.
You can do this by using the menu above and navigating to
File -> Download as -> HTML (.html). Include the finished document along with this notebook as your
submission.