Our goal is to create a system that automatically recommends products to consumers on an e-commerce website based on their past purchase behavior.
A person involved in sports-related activities might have an online buying pattern similar to this:
If we can represent each of these products by a vector, then we can easily find similar products. So, if a user is checking out a product online, we can recommend similar products by using the vector similarity score between the products.
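To make the idea concrete, here is a minimal sketch of what "vector similarity" means, with made-up product names and vectors purely for illustration (this is not part of the original notebook):

import numpy as np

def cosine_similarity(a, b):
    # cosine of the angle between two vectors: close to 1 means very similar
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# hypothetical product vectors, invented for this example
running_shoes = np.array([0.9, 0.1, 0.8, 0.2])
trail_shoes   = np.array([0.8, 0.2, 0.9, 0.1])
coffee_mug    = np.array([0.1, 0.9, 0.0, 0.7])

print(cosine_similarity(running_shoes, trail_shoes))  # high -> worth recommending
print(cosine_similarity(running_shoes, coffee_mug))   # low  -> unrelated product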
#hide
import pandas as pd
import numpy as np
import random
from tqdm import tqdm
from gensim.models import Word2Vec
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
#hide-output
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00352/Online%20Retail.xlsx
df = pd.read_excel('Online Retail.xlsx')
df.head()
| | InvoiceNo | StockCode | Description | Quantity | InvoiceDate | UnitPrice | CustomerID | Country |
|---|---|---|---|---|---|---|---|---|
| 0 | 536365 | 85123A | WHITE HANGING HEART T-LIGHT HOLDER | 6 | 2010-12-01 08:26:00 | 2.55 | 17850.0 | United Kingdom |
| 1 | 536365 | 71053 | WHITE METAL LANTERN | 6 | 2010-12-01 08:26:00 | 3.39 | 17850.0 | United Kingdom |
| 2 | 536365 | 84406B | CREAM CUPID HEARTS COAT HANGER | 8 | 2010-12-01 08:26:00 | 2.75 | 17850.0 | United Kingdom |
| 3 | 536365 | 84029G | KNITTED UNION FLAG HOT WATER BOTTLE | 6 | 2010-12-01 08:26:00 | 3.39 | 17850.0 | United Kingdom |
| 4 | 536365 | 84029E | RED WOOLLY HOTTIE WHITE HEART. | 6 | 2010-12-01 08:26:00 | 3.39 | 17850.0 | United Kingdom |
Given below is a description of the fields in this dataset:

InvoiceNo: Invoice number, a unique number assigned to each transaction.

StockCode: Product/item code, a unique number assigned to each distinct product.

Description: Product description.

Quantity: The quantity of each product per transaction.

InvoiceDate: Invoice date and time, i.e., when each transaction was generated.

UnitPrice: Product price per unit.

CustomerID: Customer number, a unique number assigned to each customer.

Country: The country where the customer resides.
# check for missing values
df.isnull().sum()
InvoiceNo           0
StockCode           0
Description      1454
Quantity            0
InvoiceDate         0
UnitPrice           0
CustomerID     135080
Country             0
dtype: int64
Since we have sufficient data, we will drop all the rows with missing values.
# remove missing values
df.dropna(inplace=True)
# again check missing values
df.isnull().sum()
InvoiceNo      0
StockCode      0
Description    0
Quantity       0
InvoiceDate    0
UnitPrice      0
CustomerID     0
Country        0
dtype: int64
# Convert the StockCode to string datatype
df['StockCode'] = df['StockCode'].astype(str)
# Check out the number of unique customers in our dataset
customers = df["CustomerID"].unique().tolist()
len(customers)
4372
There are 4,372 customers in our dataset. For each of these customers, we will extract their buying history. In other words, we will have 4,372 sequences of purchases.

It is good practice to set aside a small part of the dataset for validation purposes. Therefore, we will use the data of 90% of the customers to create the word2vec embeddings. Let's split the data.
# shuffle customer ID's
random.shuffle(customers)
# extract 90% of customer ID's
customers_train = [customers[i] for i in range(round(0.9*len(customers)))]
# split data into train and validation set
train_df = df[df['CustomerID'].isin(customers_train)]
validation_df = df[~df['CustomerID'].isin(customers_train)]
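As a quick, optional sanity check (not in the original notebook), we can confirm that every customer ID landed in exactly one of the two frames:

# every customer should appear in exactly one of the two splits
assert train_df['CustomerID'].nunique() == len(customers_train)
assert validation_df['CustomerID'].nunique() == len(customers) - len(customers_train)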
Let's create sequences of purchases made by the customers in the dataset, for both the train and validation sets.
# list to capture purchase history of the customers
purchases_train = []
# populate the list with the product codes
for i in tqdm(customers_train):
    temp = train_df[train_df["CustomerID"] == i]["StockCode"].tolist()
    purchases_train.append(temp)
100%|██████████| 3935/3935 [00:05<00:00, 664.97it/s]
# list to capture purchase history of the customers
purchases_val = []
# populate the list with the product codes
for i in tqdm(validation_df['CustomerID'].unique()):
    temp = validation_df[validation_df["CustomerID"] == i]["StockCode"].tolist()
    purchases_val.append(temp)
100%|██████████| 437/437 [00:00<00:00, 1006.50it/s]
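As a side note, the loops above filter the full DataFrame once per customer, which gets slow on large datasets. A single groupby builds the same sequences in one pass; this is only an alternative sketch, and the outer order of the lists will differ (sorted by customer ID instead of shuffled), which does not matter for word2vec training:

# alternative: build all purchase sequences in one pass
# (pandas groupby preserves the row order within each customer group)
purchases_train = train_df.groupby('CustomerID')['StockCode'].apply(list).tolist()
purchases_val = validation_df.groupby('CustomerID')['StockCode'].apply(list).tolist()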
# train word2vec model
model = Word2Vec(window=10, sg=1, hs=0,
                 negative=10,  # for negative sampling
                 alpha=0.03, min_alpha=0.0007,
                 seed=14)

model.build_vocab(purchases_train, progress_per=200)

model.train(purchases_train, total_examples=model.corpus_count,
            epochs=10, report_delay=1)
(3657318, 3696290)
# save word2vec model
model.save("word2vec_2.model")
As we do not plan to train the model any further, we call init_sims(), which precomputes the L2-normalized vectors and makes the model much more memory-efficient.
model.init_sims(replace=True)
print(model)
Word2Vec(vocab=3153, size=100, alpha=0.03)
Now we will extract the vectors of all the words in our vocabulary and store them in one place for easy access.
# extract all vectors
X = model[model.wv.vocab]
X.shape
(3153, 100)
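One caveat worth flagging: the calls above (model[...], model.wv.vocab, and init_sims()) follow the gensim 3.x API, which is what this notebook was run on (the size=100 in the printed model is the giveaway). On gensim 4.0 and later these were renamed or removed; the sketch below shows rough equivalents, so treat it as an untested translation rather than the original code:

# rough gensim >= 4.0 equivalents (untested sketch, not the original code)
from gensim.models import Word2Vec

model = Word2Vec(vector_size=100,  # 'size' was renamed to 'vector_size'
                 window=10, sg=1, hs=0,
                 negative=10, alpha=0.03, min_alpha=0.0007, seed=14)
model.build_vocab(purchases_train, progress_per=200)
model.train(purchases_train, total_examples=model.corpus_count, epochs=10)

# model.wv.vocab is gone; the vocabulary now lives in model.wv.index_to_key
X = model.wv[model.wv.index_to_key]

# init_sims() is deprecated; unit-normalized vectors are available via
X_norm = model.wv.get_normed_vectors()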
It is always helpful to visualize the embeddings you have created. Here we have 100-dimensional embeddings, and we can't visualize even 4 dimensions, let alone 100. Therefore, we will reduce the product embeddings from 100 dimensions to 2 by using UMAP, a dimensionality reduction algorithm.
#hide
!pip install umap-learn
#collapse
import umap

cluster_embedding = umap.UMAP(n_neighbors=30, min_dist=0.0,
                              n_components=2, random_state=42).fit_transform(X)

plt.figure(figsize=(10, 9))
plt.scatter(cluster_embedding[:, 0], cluster_embedding[:, 1], s=3);
Every dot in this plot is a product. As you can see, there are several tiny clusters of these data points. These are groups of similar products.

We finally have word2vec embeddings for every product in our online retail dataset. The next step is to suggest similar products for a given product or product vector.
Let's first create a product-ID to product-description dictionary, to easily map a product's ID to its description and vice versa.
products = train_df[["StockCode", "Description"]].copy()
# remove duplicates, keeping the last description seen for each product
products.drop_duplicates(inplace=True, subset='StockCode', keep="last")
# create product-ID and product-description dictionary
products_dict = products.groupby('StockCode')['Description'].apply(list).to_dict()
# test the dictionary
products_dict['84029E']
['RED WOOLLY HOTTIE WHITE HEART.']
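For the occasional lookup in the other direction (description to ID), a simple inversion of the dictionary works. This is a convenience we are adding here, not part of the original code; note that if two stock codes ever shared a description, the last one would win:

# reverse mapping: description -> product ID (added here for convenience)
reverse_dict = {desc[0]: code for code, desc in products_dict.items()}
reverse_dict['RED WOOLLY HOTTIE WHITE HEART.']  # -> '84029E'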
We have defined the function below. It takes a product's vector (v) as input and returns the top n most similar products (6 by default).
#hide
def similar_products(v, n=6):
    # extract the n most similar products for the input vector
    # (topn = n+1 because the most similar vector is the product itself)
    ms = model.similar_by_vector(v, topn=n+1)[1:]
    # extract the name and similarity score of each similar product
    new_ms = []
    for j in ms:
        pair = (products_dict[j[0]][0], j[1])
        new_ms.append(pair)
    return new_ms
Let's try out our function by passing the vector of the product '90019A' ('SILVER M.O.P ORBIT BRACELET'):
similar_products(model['90019A'])
[('SILVER M.O.P ORBIT DROP EARRINGS', 0.7879312634468079),
 ('AMBER DROP EARRINGS W LONG BEADS', 0.7682332992553711),
 ('JADE DROP EARRINGS W FILIGREE', 0.761816143989563),
 ('DROP DIAMANTE EARRINGS PURPLE', 0.7489826679229736),
 ('SILVER LARIAT BLACK STONE EARRINGS', 0.7389366626739502),
 ('WHITE VINT ART DECO CRYSTAL NECKLAC', 0.7352254390716553)]
Cool! The results are pretty relevant and match the input product well. However, this output is based on the vector of a single product only. What if we want to recommend products to a user based on the multiple purchases he or she has made in the past?

One simple solution is to take the average of the vectors of all the products the user has bought so far, and use this resultant vector to find similar products. For that we will use the function below, which takes a list of product IDs and returns a 100-dimensional vector that is the mean of the vectors of the products in the input list.
#collapse
def aggregate_vectors(products):
    product_vec = []
    for i in products:
        try:
            product_vec.append(model[i])
        except KeyError:
            # skip products that did not make it into the vocabulary
            continue
    return np.mean(product_vec, axis=0)
If you recall, we have already created a separate list of purchase sequences for validation purposes. Now let's make use of it.
#hide
len(purchases_val[0])
28
The first purchase sequence in the validation set contains 28 products. We will pass this sequence to the aggregate_vectors function.
#hide
aggregate_vectors(purchases_val[0]).shape
(100,)
The function has returned a 100-dimensional array, which means it is working fine. Now we can use this result to get the most similar products. Let's do it.
similar_products(aggregate_vectors(purchases_val[0]))
[('WHITE SPOT BLUE CERAMIC DRAWER KNOB', 0.6860978603363037),
 ('RED SPOT CERAMIC DRAWER KNOB', 0.6785424947738647),
 ('BLUE STRIPE CERAMIC DRAWER KNOB', 0.6783121824264526),
 ('BLUE SPOT CERAMIC DRAWER KNOB', 0.6738985776901245),
 ('CLEAR DRAWER KNOB ACRYLIC EDWARDIAN', 0.6731897592544556),
 ('RED STRIPE CERAMIC DRAWER KNOB', 0.6667704582214355)]
As it turns out, our system has recommended 6 products based on the user's entire purchase history. Moreover, if you want product suggestions based on only the last few purchases, you can use the same set of functions.
Below we are giving only the last 10 products purchased as input.
similar_products(aggregate_vectors(purchases_val[0][-10:]))
[('BLUE SPOT CERAMIC DRAWER KNOB', 0.7394766807556152),
 ('RED SPOT CERAMIC DRAWER KNOB', 0.7364704012870789),
 ('WHITE SPOT BLUE CERAMIC DRAWER KNOB', 0.7347637414932251),
 ('ASSORTED COLOUR BIRD ORNAMENT', 0.7345550060272217),
 ('RED STRIPE CERAMIC DRAWER KNOB', 0.7305896878242493),
 ('WHITE SPOT RED CERAMIC DRAWER KNOB', 0.6979628801345825)]
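To tie everything together, here is a small convenience wrapper. The recommend_for_user name and function are ours, composed from the two functions defined above, and not part of the original code:

def recommend_for_user(purchase_history, n=6, last_k=None):
    # optionally restrict to the user's most recent purchases
    if last_k is not None:
        purchase_history = purchase_history[-last_k:]
    # average the product vectors, then look up the nearest products
    return similar_products(aggregate_vectors(purchase_history), n=n)

# recommendations from the full history, and from the last 10 purchases only
recommend_for_user(purchases_val[0])
recommend_for_user(purchases_val[0], last_k=10)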