A tutorial demonstrating the process of training and evaluating various recommender models on online retail store data. Along with positive feedback events such as view and add-to-cart, the data also contains a negative event, 'remove-from-cart'.
#hide
!pip install git+https://github.com/maciejkula/spotlight.git@master#egg=spotlight
!git clone https://github.com/microsoft/recommenders.git
!pip install cornac
!pip install pandas==0.25.0
Successfully installed spotlight-0.1.6 Successfully installed cornac-1.12.0 ERROR: google-colab 1.0.0 has requirement pandas~=1.1.0; python_version >= "3.0", but you'll have pandas 0.25.0 which is incompatible. ERROR: fbprophet 0.7.1 has requirement pandas>=1.0.4, but you'll have pandas 0.25.0 which is incompatible. Successfully installed pandas-0.25.0
#hide
import os
import sys
import math
import random
import datetime
import itertools
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing
from scipy.sparse import csr_matrix, dok_matrix
from sklearn.model_selection import ParameterGrid
from fastai.collab import *
from fastai.tabular import *
from fastai.text import *
import cornac
from spotlight.interactions import Interactions
from spotlight.interactions import SequenceInteractions
from spotlight.cross_validation import random_train_test_split
from spotlight.cross_validation import user_based_train_test_split
from spotlight.factorization.implicit import ImplicitFactorizationModel
from spotlight.sequence.implicit import ImplicitSequenceModel
from spotlight.evaluation import mrr_score
from spotlight.evaluation import precision_recall_score
from spotlight.evaluation import sequence_mrr_score
from spotlight.evaluation import sequence_precision_recall_score
import warnings
warnings.filterwarnings("ignore")
#hide
sys.path.append('/content/recommenders/')
from reco_utils.dataset.python_splitters import python_chrono_split
from reco_utils.evaluation.python_evaluation import map_at_k
from reco_utils.evaluation.python_evaluation import precision_at_k
from reco_utils.evaluation.python_evaluation import ndcg_at_k
from reco_utils.evaluation.python_evaluation import recall_at_k
from reco_utils.evaluation.python_evaluation import get_top_k_items
from reco_utils.recommender.cornac.cornac_utils import predict_ranking
# loading data
df = pd.read_csv('rawdata.csv', header = 0,
names = ['event','userid','itemid','timestamp'],
dtype={0:'category', 1:'category', 2:'category'},
parse_dates=['timestamp'])
df.head()
 | event | userid | itemid | timestamp |
---|---|---|---|---|
0 | view_item | 2763227 | 11056 | 2020-01-13 16:05:31.244000+00:00 |
1 | add_to_cart | 2828666 | 14441 | 2020-01-13 22:36:38.680000+00:00 |
2 | view_item | 0620225789 | 14377 | 2020-01-14 10:54:41.886000+00:00 |
3 | view_item | 0620225789 | 14377 | 2020-01-14 10:54:47.692000+00:00 |
4 | add_to_cart | 0620225789 | 14377 | 2020-01-14 10:54:48.479000+00:00 |
df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 99998 entries, 0 to 99997 Data columns (total 4 columns): event 99998 non-null category userid 99998 non-null category itemid 99998 non-null category timestamp 99998 non-null datetime64[ns, UTC] dtypes: category(3), datetime64[ns, UTC](1) memory usage: 1.7 MB
# dropping exact duplicates
df = df.drop_duplicates()
# userid normalization
userid_encoder = preprocessing.LabelEncoder()
df.userid = userid_encoder.fit_transform(df.userid)
# itemid normalization
itemid_encoder = preprocessing.LabelEncoder()
df.itemid = itemid_encoder.fit_transform(df.itemid)
df.describe().T
 | count | mean | std | min | 25% | 50% | 75% | max |
---|---|---|---|---|---|---|---|---|
userid | 99432.0 | 4682.814677 | 3011.178734 | 0.0 | 2507.0 | 3687.0 | 6866.0 | 11476.0 |
itemid | 99432.0 | 1344.579964 | 769.627122 | 0.0 | 643.0 | 1356.0 | 1997.0 | 2633.0 |
df.describe(exclude='int').T
 | count | unique | top | freq | first | last |
---|---|---|---|---|---|---|
event | 99432 | 5 | begin_checkout | 41459 | NaT | NaT |
timestamp | 99432 | 61372 | 2020-01-16 04:21:49.377000+00:00 | 25 | 2020-01-13 16:05:31.244000+00:00 | 2020-03-10 13:02:21.376000+00:00 |
df.timestamp.max() - df.timestamp.min()
Timedelta('56 days 20:56:50.132000')
df.event.value_counts()
begin_checkout 41459 view_item 35397 purchase 9969 add_to_cart 7745 remove_from_cart 4862 Name: event, dtype: int64
df.event.value_counts()/df.userid.nunique()
begin_checkout 3.612355 view_item 3.084168 purchase 0.868607 add_to_cart 0.674828 remove_from_cart 0.423630 Name: event, dtype: float64
#hide-input
# User events
user_activity_count = dict()
for row in df.itertuples():
    if row.userid not in user_activity_count:
        user_activity_count[row.userid] = {'view_item': 0,
                                           'add_to_cart': 0,
                                           'begin_checkout': 0,
                                           'remove_from_cart': 0,
                                           'purchase': 0}
    if row.event == 'view_item':
        user_activity_count[row.userid]['view_item'] += 1
    elif row.event == 'add_to_cart':
        user_activity_count[row.userid]['add_to_cart'] += 1
    elif row.event == 'begin_checkout':
        user_activity_count[row.userid]['begin_checkout'] += 1
    elif row.event == 'remove_from_cart':
        user_activity_count[row.userid]['remove_from_cart'] += 1
    elif row.event == 'purchase':
        user_activity_count[row.userid]['purchase'] += 1
user_activity = pd.DataFrame(user_activity_count)
user_activity = user_activity.transpose()
user_activity['activity'] = user_activity.sum(axis=1)
tempDF = pd.DataFrame(user_activity.activity.value_counts()).reset_index()
tempDF.columns = ['#Interactions','#Users']
sns.scatterplot(x='#Interactions', y='#Users', data=tempDF);
#hide
df_activity = user_activity.copy()
event = df_activity.columns.astype('str')
sns.countplot(df_activity.loc[df_activity[event[0]]>0,event[0]]);
#hide-input
sns.countplot(df_activity.loc[df_activity[event[1]]>0,event[1]])
plt.show()
#hide-input
sns.countplot(df_activity.loc[df_activity[event[4]]>0,event[4]])
plt.show()
#hide-input
# item events
item_activity_count = dict()
for row in df.itertuples():
    if row.itemid not in item_activity_count:
        item_activity_count[row.itemid] = {'view_item': 0,
                                           'add_to_cart': 0,
                                           'begin_checkout': 0,
                                           'remove_from_cart': 0,
                                           'purchase': 0}
    if row.event == 'view_item':
        item_activity_count[row.itemid]['view_item'] += 1
    elif row.event == 'add_to_cart':
        item_activity_count[row.itemid]['add_to_cart'] += 1
    elif row.event == 'begin_checkout':
        item_activity_count[row.itemid]['begin_checkout'] += 1
    elif row.event == 'remove_from_cart':
        item_activity_count[row.itemid]['remove_from_cart'] += 1
    elif row.event == 'purchase':
        item_activity_count[row.itemid]['purchase'] += 1
item_activity = pd.DataFrame(item_activity_count)
item_activity = item_activity.transpose()
item_activity['activity'] = item_activity.sum(axis=1)
tempDF = pd.DataFrame(item_activity.activity.value_counts()).reset_index()
tempDF.columns = ['#Interactions','#Items']
sns.scatterplot(x='#Interactions', y='#Items', data=tempDF);
#hide
plt.rcParams['figure.figsize'] = 15,3
data = pd.DataFrame(pd.to_datetime(df['timestamp'], infer_datetime_format=True))
data['Count'] = 1
data.set_index('timestamp', inplace=True)
data = data.resample('D').apply({'Count':'count'})
ax = data['Count'].plot(marker='o', linestyle='-')
#collapse
def top_trending(n, timeperiod, timestamp):
    """Return the n most-viewed items in the `timeperiod` minutes preceding `timestamp`."""
    start = str(timestamp.replace(microsecond=0) - pd.Timedelta(minutes=timeperiod))
    end = str(timestamp.replace(microsecond=0))
    trending_items = df.loc[(df.timestamp.between(start, end) & (df.event == 'view_item')), :].sort_values('timestamp', ascending=False)
    return trending_items.itemid.value_counts().index[:n]
user_current_time = df.timestamp[100]
top_trending(5, 50, user_current_time)
Int64Index([2241, 972, 393, 1118, 126], dtype='int64')
#collapse
def least_n_items(n=10):
    """Return the original ids of the n least-viewed items, most recently active first."""
    temp1 = df.loc[df.event=='view_item'].groupby(['itemid'])['event'].count().sort_values(ascending=True).reset_index()
    temp2 = df.groupby('itemid').timestamp.max().reset_index()
    item_ids = pd.merge(temp1, temp2, on='itemid').sort_values(['event', 'timestamp'], ascending=[True, False]).reset_index().loc[:n-1, 'itemid']
    return itemid_encoder.inverse_transform(item_ids.values)
least_n_items(10)
array(['15742', '16052', '16443', '16074', '16424', '11574', '11465', '16033', '11711', '16013'], dtype=object)
Often there are no explicit ratings or preferences given by users; the interactions are implicit. These implicit events can still be turned into affinity scores that reflect each user's preference towards an item. There are a few common options for doing so.
Option 1 - Simple Count: The simplest technique is to count the number of interactions between a user and an item and use that count as the affinity score.
Option 2 - Weighted Count: It is often useful to weight the different interaction types in the count aggregation. For example, if the weights of the three event types "click", "add", and "purchase" are 1, 2, and 3 respectively, a user who clicked an item twice and purchased it once gets an affinity of 2×1 + 1×3 = 5.
Option 3 - Time-dependent Count: In many scenarios, time dependency plays a critical role in preparing a dataset for a collaborative filtering model that captures user interest drift over time. A common technique for achieving a time-dependent count is to add a time-decay factor to the counting, so that recent interactions contribute more than older ones.
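To make Option 3 concrete, here is a minimal sketch of the decay factor that the time-decayed count further below applies to each event weight; the function name decay_factor is purely illustrative.
import math
T = 30  # reference period in days
def decay_factor(age_days, T=T):
    # Mirrors the factor used below: exp(-log2(age_days / T)).
    # age == T -> 1.0, age == 2T -> ~0.37, age == T/2 -> ~2.7 (assumes age_days > 0)
    return math.exp(-math.log2(age_days / T))
# e.g. an add_to_cart event (weight 3) that happened 60 days ago contributes
# roughly 3 * decay_factor(60) ≈ 3 * 0.37 ≈ 1.1 to the user-item affinity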
#collapse
data_count = df.groupby(['userid', 'itemid']).agg({'timestamp': 'count'}).reset_index()
data_count.columns = ['userid', 'itemid', 'affinity']
data_count.head()
 | userid | itemid | affinity |
---|---|---|---|
0 | 0 | 328 | 1 |
1 | 1 | 1122 | 1 |
2 | 1 | 1204 | 1 |
3 | 1 | 1271 | 1 |
4 | 1 | 1821 | 1 |
#hide
# keep only the positive events; remove_from_cart is excluded from the weighted count
data_w = df.loc[df.event!='remove_from_cart',:]
# affinity weight per event type (the remove_from_cart entry is unused after the filter above)
affinity_weights = {
    'view_item': 1,
    'add_to_cart': 3,
    'begin_checkout': 5,
    'purchase': 6,
    'remove_from_cart': 3
}
# preview the per-event weights; they are assigned to a 'weight' column in the next cell
data_w['event'].apply(lambda x: affinity_weights[x])
data_w.head()
 | event | userid | itemid | timestamp |
---|---|---|---|---|
0 | view_item | 3141 | 236 | 2020-01-13 16:05:31.244000+00:00 |
1 | add_to_cart | 3421 | 1001 | 2020-01-13 22:36:38.680000+00:00 |
2 | view_item | 550 | 972 | 2020-01-14 10:54:41.886000+00:00 |
3 | view_item | 550 | 972 | 2020-01-14 10:54:47.692000+00:00 |
4 | add_to_cart | 550 | 972 | 2020-01-14 10:54:48.479000+00:00 |
#collapse
data_w['weight'] = data_w['event'].apply(lambda x: affinity_weights[x])
data_wcount = data_w.groupby(['userid', 'itemid'])['weight'].sum().reset_index()
data_wcount.columns = ['userid', 'itemid', 'affinity']
data_wcount.head()
 | userid | itemid | affinity |
---|---|---|---|
0 | 0 | 328 | 6 |
1 | 1 | 1122 | 6 |
2 | 1 | 1204 | 6 |
3 | 1 | 1271 | 6 |
4 | 1 | 1821 | 6 |
#hide
T = 30  # reference period in days
t_ref = datetime.datetime.utcnow()
# decay each event weight by exp(-log2(age_in_days / T)); see the sketch above
data_w['timedecay'] = data_w.apply(
    lambda x: x['weight'] * math.exp(-math.log2((t_ref - pd.to_datetime(x['timestamp']).tz_convert(None)).days / T)),
    axis=1
)
data_w.head()
 | event | userid | itemid | timestamp | weight | timedecay |
---|---|---|---|---|---|---|
0 | view_item | 3141 | 236 | 2020-01-13 16:05:31.244000+00:00 | 1 | 0.019056 |
1 | add_to_cart | 3421 | 1001 | 2020-01-13 22:36:38.680000+00:00 | 3 | 0.057167 |
2 | view_item | 550 | 972 | 2020-01-14 10:54:41.886000+00:00 | 1 | 0.019056 |
3 | view_item | 550 | 972 | 2020-01-14 10:54:47.692000+00:00 | 1 | 0.019056 |
4 | add_to_cart | 550 | 972 | 2020-01-14 10:54:48.479000+00:00 | 3 | 0.057167 |
#collapse
data_wt = data_w.groupby(['userid', 'itemid'])['timedecay'].sum().reset_index()
data_wt.columns = ['userid', 'itemid', 'affinity']
data_wt.head()
 | userid | itemid | affinity |
---|---|---|---|
0 | 0 | 328 | 0.117590 |
1 | 1 | 1122 | 0.120232 |
2 | 1 | 1204 | 0.120232 |
3 | 1 | 1271 | 0.120232 |
4 | 1 | 1821 | 0.120232 |
Option 1 - Random Split: Randomly splits the dataset into train and test sets according to the given split ratios.
Option 2 - Chronological Split: Splits the dataset on the timestamp, so that for each user the earliest interactions go into the train set and the most recent ones into the test set.
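The cells below use the chronological split. For completeness, here is a minimal sketch of Option 1 using the python_random_split utility from the same recommenders repo; this assumes that helper is available in the cloned version of reco_utils.
from reco_utils.dataset.python_splitters import python_random_split
# Randomly split the user-item-affinity rows 75/25, ignoring time ordering.
rand_train, rand_test = python_random_split(
    data_w[['userid', 'itemid', 'timedecay', 'timestamp']],
    ratio=0.75,
    seed=42,
)
print(len(rand_train), len(rand_test))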
#collapse
data = data_w[['userid','itemid','timedecay','timestamp']]
col = {
'col_user': 'userid',
'col_item': 'itemid',
'col_rating': 'timedecay',
'col_timestamp': 'timestamp',
}
col3 = {
'col_user': 'userid',
'col_item': 'itemid',
'col_timestamp': 'timestamp',
}
train, test = python_chrono_split(data, ratio=0.75, min_rating=10,
filter_by='user', **col3)
train.loc[train.userid==7,:]
 | userid | itemid | timedecay | timestamp |
---|---|---|---|---|
16679 | 7 | 1464 | 0.019174 | 2020-01-16 06:42:31.341000+00:00 |
16691 | 7 | 1464 | 0.019174 | 2020-01-16 06:43:29.482000+00:00 |
16692 | 7 | 2109 | 0.019174 | 2020-01-16 06:43:42.262000+00:00 |
16694 | 7 | 1464 | 0.019174 | 2020-01-16 06:43:57.961000+00:00 |
16805 | 7 | 201 | 0.019174 | 2020-01-16 06:45:55.261000+00:00 |
16890 | 7 | 2570 | 0.019174 | 2020-01-16 06:54:12.315000+00:00 |
16999 | 7 | 2570 | 0.019174 | 2020-01-16 06:54:29.130000+00:00 |
17000 | 7 | 2570 | 0.057522 | 2020-01-16 06:54:35.097000+00:00 |
test.loc[test.userid==7,:]
 | userid | itemid | timedecay | timestamp |
---|---|---|---|---|
17001 | 7 | 1464 | 0.019174 | 2020-01-16 06:54:41.415000+00:00 |
17003 | 7 | 1464 | 0.057522 | 2020-01-16 06:54:44.195000+00:00 |
#hide
# Recommending the most popular items is an intuitive and simple approach
item_counts = train['itemid'].value_counts().to_frame().reset_index()
item_counts.columns = ['itemid', 'count']
item_counts.head()
 | itemid | count |
---|---|---|
0 | 2564 | 461 |
1 | 1463 | 302 |
2 | 1710 | 267 |
3 | 1985 | 243 |
4 | 886 | 229 |
#hide
user_item_col = ['userid', 'itemid']
# Cross join users and items
test_users = test['userid'].unique()
user_item_list = list(itertools.product(test_users, item_counts['itemid']))
users_items = pd.DataFrame(user_item_list, columns=user_item_col)
print("Number of user-item pairs:", len(users_items))
# Remove seen items (items in the train set) as we will not recommend those again to the users
from reco_utils.dataset.pandas_df_utils import filter_by
users_items_remove_seen = filter_by(users_items, train, user_item_col)
print("After remove seen items:", len(users_items_remove_seen))
Number of user-item pairs: 4124250 After remove seen items: 4107466
# Generate recommendations
baseline_recommendations = pd.merge(item_counts, users_items_remove_seen,
on=['itemid'], how='inner')
baseline_recommendations.head()
 | itemid | count | userid |
---|---|---|---|
0 | 2564 | 461 | 7 |
1 | 2564 | 461 | 21 |
2 | 2564 | 461 | 73 |
3 | 2564 | 461 | 75 |
4 | 2564 | 461 | 113 |
#hide
k = 10
cols = {
'col_user': 'userid',
'col_item': 'itemid',
'col_rating': 'timedecay',
'col_prediction': 'count',
}
eval_map = map_at_k(test, baseline_recommendations, k=k, **cols)
eval_ndcg = ndcg_at_k(test, baseline_recommendations, k=k, **cols)
eval_precision = precision_at_k(test, baseline_recommendations, k=k, **cols)
eval_recall = recall_at_k(test, baseline_recommendations, k=k, **cols)
print("MAP:\t%f" % eval_map,
"NDCG@K:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
MAP: 0.005334 NDCG@K: 0.010356 Precision@K: 0.007092 Recall@K: 0.011395
#hide
TOP_K = 10
NUM_FACTORS = 200
NUM_EPOCHS = 100
SEED = 40
train_set = cornac.data.Dataset.from_uir(train.itertuples(index=False), seed=SEED)
bpr = cornac.models.BPR(
k=NUM_FACTORS,
max_iter=NUM_EPOCHS,
learning_rate=0.01,
lambda_reg=0.001,
verbose=True,
seed=SEED
)
from reco_utils.common.timer import Timer
with Timer() as t:
    bpr.fit(train_set)
print("Took {} seconds for training.".format(t))
Optimization finished! Took 3.1812 seconds for training.
#hide
with Timer() as t:
    all_predictions = predict_ranking(bpr, train, usercol='userid', itemcol='itemid', remove_seen=True)
print("Took {} seconds for prediction.".format(t))
Took 4.7581 seconds for prediction.
all_predictions.head()
 | userid | itemid | prediction |
---|---|---|---|
51214 | 7 | 2551 | -0.438445 |
51215 | 7 | 481 | 2.522187 |
51216 | 7 | 1185 | 2.406107 |
51217 | 7 | 1766 | 1.112975 |
51218 | 7 | 1359 | 2.083620 |
#hide
k = 10
cols = {
'col_user': 'userid',
'col_item': 'itemid',
'col_rating': 'timedecay',
'col_prediction': 'prediction',
}
eval_map = map_at_k(test, all_predictions, k=k, **cols)
eval_ndcg = ndcg_at_k(test, all_predictions, k=k, **cols)
eval_precision = precision_at_k(test, all_predictions, k=k, **cols)
eval_recall = recall_at_k(test, all_predictions, k=k, **cols)
#hide-input
print("MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
MAP: 0.004738 NDCG: 0.009597 Precision@K: 0.006601 Recall@K: 0.010597
#collapse
from reco_utils.recommender.sar.sar_singlenode import SARSingleNode
TOP_K = 10
header = {
"col_user": "userid",
"col_item": "itemid",
"col_rating": "timedecay",
"col_timestamp": "timestamp",
"col_prediction": "prediction",
}
model = SARSingleNode(
similarity_type="jaccard",
time_decay_coefficient=0,
time_now=None,
timedecay_formula=False,
**header
)
model.fit(train)
#hide
top_k = model.recommend_k_items(test, remove_seen=True)
# all ranking metrics have the same arguments
args = [test, top_k]
kwargs = dict(col_user='userid',
col_item='itemid',
col_rating='timedecay',
col_prediction='prediction',
relevancy_method='top_k',
k=TOP_K)
eval_map = map_at_k(*args, **kwargs)
eval_ndcg = ndcg_at_k(*args, **kwargs)
eval_precision = precision_at_k(*args, **kwargs)
eval_recall = recall_at_k(*args, **kwargs)
#hide-input
print(f"Model:",
f"Top K:\t\t {TOP_K}",
f"MAP:\t\t {eval_map:f}",
f"NDCG:\t\t {eval_ndcg:f}",
f"Precision@K:\t {eval_precision:f}",
f"Recall@K:\t {eval_recall:f}", sep='\n')
Model: Top K: 10 MAP: 0.024426 NDCG: 0.032738 Precision@K: 0.019258 Recall@K: 0.036009
#collapse
interactions = Interactions(user_ids = df.userid.astype('int32').values,
item_ids = df.itemid.astype('int32').values,
timestamps = df.timestamp.astype('int32'),
num_users = df.userid.nunique(),
num_items = df.itemid.nunique())
train_user, test_user = random_train_test_split(interactions, test_percentage=0.2)
model = ImplicitFactorizationModel(loss='bpr', embedding_dim=64, n_iter=10,
batch_size=256, l2=0.0, learning_rate=0.01,
optimizer_func=None, use_cuda=False,
representation=None, sparse=False,
num_negative_samples=10)
model.fit(train_user, verbose=1)
pr = precision_recall_score(model, test=test_user, train=train_user, k=10)
print('Precision@10 is {:.3f} and Recall@10 is {:.3f}'.format(pr[0].mean(), pr[1].mean()))
Epoch 0: loss 0.26659833122392174 Epoch 1: loss 0.06129162273462562 Epoch 2: loss 0.022607273167640066 Epoch 3: loss 0.013953083943443858 Epoch 4: loss 0.01050195922488137 Epoch 5: loss 0.009170394043447121 Epoch 6: loss 0.008144461540834697 Epoch 7: loss 0.007209992620171649 Epoch 8: loss 0.00663076309035038 Epoch 9: loss 0.006706491189820159 Pricison@10 is 0.007 and Recall@10 is 0.050
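Precision and recall summarize ranking quality, but the fitted factorization model can also be queried directly. Here is a minimal sketch of producing top-10 item scores for a single user with Spotlight's predict; the user id and variable names are illustrative only.
import numpy as np
user_id = 7                             # any user id present in the interactions
scores = model.predict(user_id)         # one score per (label-encoded) item id
top_items = np.argsort(-scores)[:10]    # highest-scoring items for this user
print(top_items)
# itemid_encoder.inverse_transform(top_items) maps back to the original catalogue ids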
Implicit Factorization Model with Grid Search
#hide
interactions = Interactions(user_ids = df.userid.astype('int32').values,
item_ids = df.itemid.astype('int32').values,
timestamps = df.timestamp.astype('int32'),
num_users = df.userid.nunique(),
num_items = df.itemid.nunique())
train_user, test_user = random_train_test_split(interactions, test_percentage=0.2)
params_grid = {'loss':['bpr', 'hinge'],
'embedding_dim':[32, 64],
'learning_rate': [0.01, 0.05, 0.1],
'num_negative_samples': [5,10,50]
}
grid = ParameterGrid(params_grid)
for p in grid:
    model = ImplicitFactorizationModel(**p, n_iter=10, batch_size=256, l2=0.0,
                                       optimizer_func=None, use_cuda=False,
                                       representation=None, sparse=False)
    model.fit(train_user, verbose=1)
    pr = precision_recall_score(model, test=test_user, train=train_user, k=10)
    print('Precision@10 is {:.3f} and Recall@10 is {:.3f}'.format(pr[0].mean(), pr[1].mean()))
/usr/local/lib/python3.7/dist-packages/pandas/core/series.py:1139: FutureWarning: Passing list-likes to .loc or [] with any missing label will raise KeyError in the future, you can use .reindex() as an alternative. See the documentation here: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike return self.loc[key]
Epoch 0: loss 0.29605319564173843 Epoch 1: loss 0.09024034632172231 Epoch 2: loss 0.04134876810469428 Epoch 3: loss 0.025742946812505697 Epoch 4: loss 0.019680531030682506 Epoch 5: loss 0.016188400935297803 Epoch 6: loss 0.013594527123945127 Epoch 7: loss 0.012133106459800596 Epoch 8: loss 0.011063214711144184 Epoch 9: loss 0.010611439098295579 Pricison@10 is 0.007 and Recall@10 is 0.047 Epoch 0: loss 0.29609743690184076 Epoch 1: loss 0.09252797421726766 Epoch 2: loss 0.04244603450826318 Epoch 3: loss 0.02713844289709709 Epoch 4: loss 0.01969600333667501 Epoch 5: loss 0.01603335079847808 Epoch 6: loss 0.014221997893418386 Epoch 7: loss 0.012988711276070673 Epoch 8: loss 0.011557109632238842 Epoch 9: loss 0.011109750410182539 Pricison@10 is 0.007 and Recall@10 is 0.050 Epoch 0: loss 0.2952524096827798 Epoch 1: loss 0.09147290865085132 Epoch 2: loss 0.0416853272766354 Epoch 3: loss 0.02605784918897309 Epoch 4: loss 0.019775863711413752 Epoch 5: loss 0.01594500694202313 Epoch 6: loss 0.014232497929574449 Epoch 7: loss 0.01294337215285353 Epoch 8: loss 0.011998023290540484 Epoch 9: loss 0.010945739889355717 Pricison@10 is 0.006 and Recall@10 is 0.045 Epoch 0: loss 0.46221058612084465 Epoch 1: loss 0.13077438028583188 Epoch 2: loss 0.07061944603680415 Epoch 3: loss 0.053692078858709795 Epoch 4: loss 0.045329024862510024 Epoch 5: loss 0.041573114408677225 Epoch 6: loss 0.0387881482244616 Epoch 7: loss 0.036440602202759007 Epoch 8: loss 0.035787560947095655 Epoch 9: loss 0.03322120658586167 Pricison@10 is 0.006 and Recall@10 is 0.041 Epoch 0: loss 0.4623263587522353 Epoch 1: loss 0.1317922399166696 Epoch 2: loss 0.07110214101635758 Epoch 3: loss 0.055422348614147236 Epoch 4: loss 0.0476353759083813 Epoch 5: loss 0.04135727373302175 Epoch 6: loss 0.04034828715972674 Epoch 7: loss 0.037419761230097034 Epoch 8: loss 0.03681224007818285 Epoch 9: loss 0.03620119842961191 Pricison@10 is 0.005 and Recall@10 is 0.039 Epoch 0: loss 0.46284369990181695 Epoch 1: loss 0.1312407074777643 Epoch 2: loss 0.070646146030863 Epoch 3: loss 0.05477426459240185 Epoch 4: loss 0.04863271401697991 Epoch 5: loss 0.041741091150107684 Epoch 6: loss 0.03963903292947692 Epoch 7: loss 0.036970180826159826 Epoch 8: loss 0.035346197669612923 Epoch 9: loss 0.03657649763952641 Pricison@10 is 0.006 and Recall@10 is 0.041 Epoch 0: loss 0.1972397086704659 Epoch 1: loss 0.06829694751949555 Epoch 2: loss 0.05330947832087612 Epoch 3: loss 0.04971556666148437 Epoch 4: loss 0.046018893009956055 Epoch 5: loss 0.045985528797965344 Epoch 6: loss 0.044439963151931376 Epoch 7: loss 0.04240149582385825 Epoch 8: loss 0.04087006046445711 Epoch 9: loss 0.04057819234848118 Pricison@10 is 0.004 and Recall@10 is 0.025 Epoch 0: loss 0.19465743656902068 Epoch 1: loss 0.0670443969775722 Epoch 2: loss 0.05303467901788916 Epoch 3: loss 0.04968067335258343 Epoch 4: loss 0.046125373380839635 Epoch 5: loss 0.04318332981461977 Epoch 6: loss 0.042641045980850216 Epoch 7: loss 0.04208077507698459 Epoch 8: loss 0.041522538493060986 Epoch 9: loss 0.040731122471703594 Pricison@10 is 0.004 and Recall@10 is 0.029 Epoch 0: loss 0.19681165678803944 Epoch 1: loss 0.0664056163892102 Epoch 2: loss 0.0535869363072505 Epoch 3: loss 0.04857526848338233 Epoch 4: loss 0.048490358762131626 Epoch 5: loss 0.044566346811567854 Epoch 6: loss 0.04327488357015553 Epoch 7: loss 0.042236022570341154 Epoch 8: loss 0.04146730487453497 Epoch 9: loss 0.040008874127934795 Pricison@10 is 0.004 and Recall@10 is 0.026 Epoch 0: loss 0.5279965880791091 Epoch 1: loss 0.452307569587729 Epoch 2: 
loss 0.4294101142614984 Epoch 3: loss 0.42500076900532774 Epoch 4: loss 0.4363042430239475 Epoch 5: loss 0.4657211991032987 Epoch 6: loss 0.42073505898859725 Epoch 7: loss 0.42356977115873357 Epoch 8: loss 0.45585063269954784 Epoch 9: loss 0.43355618334611895 Pricison@10 is 0.004 and Recall@10 is 0.027 Epoch 0: loss 0.5260213470727301 Epoch 1: loss 0.45165729455625897 Epoch 2: loss 0.4297575738843998 Epoch 3: loss 0.42066948568610135 Epoch 4: loss 0.4312238985031747 Epoch 5: loss 0.42702193119805726 Epoch 6: loss 0.444290085740603 Epoch 7: loss 0.43324954985039027 Epoch 8: loss 0.4306122352312232 Epoch 9: loss 0.4415178039856854 Pricison@10 is 0.005 and Recall@10 is 0.031 Epoch 0: loss 0.5295544630844876 Epoch 1: loss 0.4380661493041508 Epoch 2: loss 0.42663705986220735 Epoch 3: loss 0.44285977188605585 Epoch 4: loss 0.4236099698440055 Epoch 5: loss 0.44194267405095206 Epoch 6: loss 0.424336733338438 Epoch 7: loss 0.4380265812590191 Epoch 8: loss 0.434472453364222 Epoch 9: loss 0.43545967928610047 Pricison@10 is 0.004 and Recall@10 is 0.030 Epoch 0: loss 0.2234289297816071 Epoch 1: loss 0.1279702181074397 Epoch 2: loss 0.11148615261701525 Epoch 3: loss 0.10850249113473096 Epoch 4: loss 0.10298196010746756 Epoch 5: loss 0.09748751210150611 Epoch 6: loss 0.0969734352645 Epoch 7: loss 0.0931075753170961 Epoch 8: loss 0.0915131751699463 Epoch 9: loss 0.0897374461102524 Pricison@10 is 0.003 and Recall@10 is 0.018 Epoch 0: loss 0.2253206785756292 Epoch 1: loss 0.13100997741870174 Epoch 2: loss 0.11400738520206362 Epoch 3: loss 0.10849933280726337 Epoch 4: loss 0.10491025197496368 Epoch 5: loss 0.10026714504004675 Epoch 6: loss 0.09795559029966305 Epoch 7: loss 0.0932772220426817 Epoch 8: loss 0.09295417268991087 Epoch 9: loss 0.09168372073261684 Pricison@10 is 0.004 and Recall@10 is 0.027 Epoch 0: loss 0.22577087595531795 Epoch 1: loss 0.12992173730368783 Epoch 2: loss 0.11640790809197442 Epoch 3: loss 0.10946303474270645 Epoch 4: loss 0.10366896155419074 Epoch 5: loss 0.10071416692718432 Epoch 6: loss 0.09589668794556062 Epoch 7: loss 0.09583283946445134 Epoch 8: loss 0.09371932657058216 Epoch 9: loss 0.09230355569835261 Pricison@10 is 0.003 and Recall@10 is 0.024 Epoch 0: loss 1.2916136291057734 Epoch 1: loss 1.7776802640252558 Epoch 2: loss 1.7409840778139243 Epoch 3: loss 1.687181089085398 Epoch 4: loss 1.7308643511062267 Epoch 5: loss 1.6736974808180831 Epoch 6: loss 1.651738966392934 Epoch 7: loss 1.7159456276241989 Epoch 8: loss 1.6285421782655347 Epoch 9: loss 1.763627714761967 Pricison@10 is 0.004 and Recall@10 is 0.026 Epoch 0: loss 1.294814380993797 Epoch 1: loss 1.7931470209762599 Epoch 2: loss 1.7688716473686734 Epoch 3: loss 1.755566406767468 Epoch 4: loss 1.6464107822375282 Epoch 5: loss 1.703617206340434 Epoch 6: loss 1.6874780139355798 Epoch 7: loss 1.6423051294981474 Epoch 8: loss 1.628277428663812 Epoch 9: loss 1.6777792075630935 Pricison@10 is 0.004 and Recall@10 is 0.025 Epoch 0: loss 1.3243095058336902 Epoch 1: loss 1.7953596373846292 Epoch 2: loss 1.72956018528371 Epoch 3: loss 1.743988018614686 Epoch 4: loss 1.6520857222593865 Epoch 5: loss 1.6576296872073049 Epoch 6: loss 1.6600280645575938 Epoch 7: loss 1.6721530433061422 Epoch 8: loss 1.7022247500358287 Epoch 9: loss 1.6738990024930027 Pricison@10 is 0.004 and Recall@10 is 0.029 Epoch 0: loss 0.26678223316213323 Epoch 1: loss 0.0612365399933513 Epoch 2: loss 0.022778521771048617 Epoch 3: loss 0.013828953936204938 Epoch 4: loss 0.010637697962882338 Epoch 5: loss 0.00934013093064069 Epoch 6: loss 0.007684989493951153 
Epoch 7: loss 0.007701974666836129 Epoch 8: loss 0.006867655064054193 Epoch 9: loss 0.00671699859270784 Pricison@10 is 0.008 and Recall@10 is 0.051 Epoch 0: loss 0.267548588622613 Epoch 1: loss 0.06067183332883085 Epoch 2: loss 0.02281374950560822 Epoch 3: loss 0.013809392712351497 Epoch 4: loss 0.010587326816950963 Epoch 5: loss 0.008792504775292956 Epoch 6: loss 0.008009265757156865 Epoch 7: loss 0.0074565373499611235 Epoch 8: loss 0.007098590578192132 Epoch 9: loss 0.00681110867523735 Pricison@10 is 0.008 and Recall@10 is 0.053 Epoch 0: loss 0.26481677225260875 Epoch 1: loss 0.06162541927440374 Epoch 2: loss 0.022879681100775384 Epoch 3: loss 0.01433396209300858 Epoch 4: loss 0.010934004527920935 Epoch 5: loss 0.00929655465282428 Epoch 6: loss 0.008289361950549091 Epoch 7: loss 0.00740667303029891 Epoch 8: loss 0.006837165096948384 Epoch 9: loss 0.006828171610562699 Pricison@10 is 0.007 and Recall@10 is 0.053 Epoch 0: loss 0.424381799061582 Epoch 1: loss 0.08664971538534885 Epoch 2: loss 0.04589304513630376 Epoch 3: loss 0.038735897107337304 Epoch 4: loss 0.033811092456066054 Epoch 5: loss 0.03218233990677635 Epoch 6: loss 0.03144083229279882 Epoch 7: loss 0.030326535367523694 Epoch 8: loss 0.03039964140839036 Epoch 9: loss 0.0313607786476636 Pricison@10 is 0.006 and Recall@10 is 0.044 Epoch 0: loss 0.4254932851078426 Epoch 1: loss 0.08756166415055465 Epoch 2: loss 0.04538155602548762 Epoch 3: loss 0.03897529531126524 Epoch 4: loss 0.03492546319015252 Epoch 5: loss 0.03294036035512805 Epoch 6: loss 0.03329139371130053 Epoch 7: loss 0.03167530871969663 Epoch 8: loss 0.032296937689712195 Epoch 9: loss 0.032698962740881604 Pricison@10 is 0.006 and Recall@10 is 0.041 Epoch 0: loss 0.42160729819555376 Epoch 1: loss 0.08756704588938756 Epoch 2: loss 0.04579582711205581 Epoch 3: loss 0.04007043815861753 Epoch 4: loss 0.034128753362085755 Epoch 5: loss 0.03233827561382623 Epoch 6: loss 0.03263001367180246 Epoch 7: loss 0.03131351158415174 Epoch 8: loss 0.030942449880721964 Epoch 9: loss 0.030438972015676486 Pricison@10 is 0.006 and Recall@10 is 0.042 Epoch 0: loss 0.1919897337866366 Epoch 1: loss 0.062042856169714805 Epoch 2: loss 0.051357087131194364 Epoch 3: loss 0.04529016200824374 Epoch 4: loss 0.04487682666354049 Epoch 5: loss 0.04414309574428862 Epoch 6: loss 0.04504055042310926 Epoch 7: loss 0.04177342097116245 Epoch 8: loss 0.04100956085027223 Epoch 9: loss 0.03946466099506789 Pricison@10 is 0.004 and Recall@10 is 0.024 Epoch 0: loss 0.19505955873003344 Epoch 1: loss 0.06246872200148473 Epoch 2: loss 0.05016463761400563 Epoch 3: loss 0.04570818116882415 Epoch 4: loss 0.04542697983467502 Epoch 5: loss 0.04463409970535438 Epoch 6: loss 0.04135937661747074 Epoch 7: loss 0.04026785464747734 Epoch 8: loss 0.039887369514541804 Epoch 9: loss 0.03812464036442747 Pricison@10 is 0.004 and Recall@10 is 0.030 Epoch 0: loss 0.1928439362399831 Epoch 1: loss 0.061154625255770236 Epoch 2: loss 0.05057755473797536 Epoch 3: loss 0.04760763201251674 Epoch 4: loss 0.0465072814878256 Epoch 5: loss 0.0445218765060043 Epoch 6: loss 0.04306420925824972 Epoch 7: loss 0.04141842263639932 Epoch 8: loss 0.03885935280783479 Epoch 9: loss 0.03769551119632372 Pricison@10 is 0.005 and Recall@10 is 0.032 Epoch 0: loss 0.6255892824705007 Epoch 1: loss 0.5785694408119683 Epoch 2: loss 0.5343080787796682 Epoch 3: loss 0.5397515998392627 Epoch 4: loss 0.5448340573063617 Epoch 5: loss 0.5294443992029433 Epoch 6: loss 0.5306488025118972 Epoch 7: loss 0.4954649354915144 Epoch 8: loss 0.5235111305603931 Epoch 9: loss 
0.5053859731583733 Pricison@10 is 0.004 and Recall@10 is 0.029 Epoch 0: loss 0.6326714631445538 Epoch 1: loss 0.573512017918553 Epoch 2: loss 0.5273089357127714 Epoch 3: loss 0.5472853745654274 Epoch 4: loss 0.5311281712923402 Epoch 5: loss 0.5377801821785725 Epoch 6: loss 0.5537312353970154 Epoch 7: loss 0.5367784741966479 Epoch 8: loss 0.47841025037253787 Epoch 9: loss 0.5312429334357429 Pricison@10 is 0.004 and Recall@10 is 0.029 Epoch 0: loss 0.6268260534553283 Epoch 1: loss 0.5624153050579058 Epoch 2: loss 0.5298431843183815 Epoch 3: loss 0.5396579839888109 Epoch 4: loss 0.5381371908105455 Epoch 5: loss 0.5168753017782207 Epoch 6: loss 0.48948019892696015 Epoch 7: loss 0.4905602858189218 Epoch 8: loss 0.5053348687396555 Epoch 9: loss 0.5373068574181132 Pricison@10 is 0.004 and Recall@10 is 0.029 Epoch 0: loss 0.23225398366474262 Epoch 1: loss 0.13146403681043645 Epoch 2: loss 0.11786584738174818 Epoch 3: loss 0.10868372224343167 Epoch 4: loss 0.10243410413479882 Epoch 5: loss 0.09941196136200543 Epoch 6: loss 0.09679907708880986 Epoch 7: loss 0.09597368322768011 Epoch 8: loss 0.09440059982647467 Epoch 9: loss 0.09285185775956158 Pricison@10 is 0.003 and Recall@10 is 0.023 Epoch 0: loss 0.23057798509429123 Epoch 1: loss 0.13307480200214786 Epoch 2: loss 0.11867008248038614 Epoch 3: loss 0.11186671285767263 Epoch 4: loss 0.10736680121858787 Epoch 5: loss 0.10359690031725494 Epoch 6: loss 0.10164711832808529 Epoch 7: loss 0.09720324220620934 Epoch 8: loss 0.09211324512287734 Epoch 9: loss 0.09142769774919154 Pricison@10 is 0.003 and Recall@10 is 0.018 Epoch 0: loss 0.23323329780071111 Epoch 1: loss 0.13417892158031464 Epoch 2: loss 0.11723061798326072 Epoch 3: loss 0.10971038677157696 Epoch 4: loss 0.1073669156125504 Epoch 5: loss 0.10198603978925579 Epoch 6: loss 0.10140255498445302 Epoch 7: loss 0.09986353406856298 Epoch 8: loss 0.09296791315366218 Epoch 9: loss 0.09289196610354918 Pricison@10 is 0.004 and Recall@10 is 0.026 Epoch 0: loss 1.8212750715074815 Epoch 1: loss 2.368166323068441 Epoch 2: loss 2.2547610501767736 Epoch 3: loss 2.0896969359978987 Epoch 4: loss 2.074246685221264 Epoch 5: loss 2.107905259206172 Epoch 6: loss 2.1261368730252195 Epoch 7: loss 2.0352458648168006 Epoch 8: loss 2.1936914333384903 Epoch 9: loss 2.0269214924412906 Pricison@10 is 0.004 and Recall@10 is 0.031 Epoch 0: loss 1.8324132722673692 Epoch 1: loss 2.4329963008307183 Epoch 2: loss 2.2162452385164917 Epoch 3: loss 2.092274981104676 Epoch 4: loss 2.1043862517432 Epoch 5: loss 2.0506169550671838 Epoch 6: loss 2.1609063529412462 Epoch 7: loss 2.1431561312683143 Epoch 8: loss 2.0215363380322504 Epoch 9: loss 1.9930379555180333 Pricison@10 is 0.004 and Recall@10 is 0.028 Epoch 0: loss 1.8448880979869144 Epoch 1: loss 2.378598579448136 Epoch 2: loss 2.296310121415129 Epoch 3: loss 2.131826051657606 Epoch 4: loss 2.101037339573888 Epoch 5: loss 2.0981224655530077 Epoch 6: loss 2.1441538588793714 Epoch 7: loss 2.014261698416192 Epoch 8: loss 1.9355182779228188 Epoch 9: loss 2.084795298492027 Pricison@10 is 0.004 and Recall@10 is 0.030
#collapse
interactions = Interactions(user_ids = df.userid.astype('int32').values,
item_ids = df.itemid.astype('int32').values+1,
timestamps = df.timestamp.astype('int32'))
train, test = random_train_test_split(interactions, test_percentage=0.2)
train_seq = train.to_sequence(max_sequence_length=10)
test_seq = test.to_sequence(max_sequence_length=10)
model = ImplicitSequenceModel(loss='bpr', representation='pooling',
embedding_dim=32, n_iter=10, batch_size=256,
l2=0.0, learning_rate=0.01, optimizer_func=None,
use_cuda=False, sparse=False, num_negative_samples=5)
model.fit(train_seq, verbose=1)
mrr_seq = sequence_mrr_score(model, test_seq)
mrr_seq.mean()
Epoch 0: loss 0.4226887328702895 Epoch 1: loss 0.23515070266410953 Epoch 2: loss 0.16919970976524665 Epoch 3: loss 0.1425025990751923 Epoch 4: loss 0.12612225017586692 Epoch 5: loss 0.11565039795441706 Epoch 6: loss 0.10787886735357222 Epoch 7: loss 0.10086931410383006 Epoch 8: loss 0.09461003749585542 Epoch 9: loss 0.09128284808553633
0.10435609591957387
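Beyond MRR, the trained sequence model can be asked directly for next-item scores given a short interaction history. A minimal sketch follows; the toy history below is illustrative, and its item ids are the label-encoded ids shifted by +1 because 0 is reserved for padding.
import numpy as np
history = np.array([237, 973, 1002])    # a short, oldest-to-newest interaction history
scores = model.predict(history)         # one score per item id (index 0 is the padding id)
top_next = np.argsort(-scores)[:10]     # highest-scoring candidate next items (ignore 0 if it appears)
print(top_next - 1)                     # shift back to the label-encoded item ids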
#hide
df['rating'] = df['event'].map({'view_item': 1,
'add_to_cart': 2,
'begin_checkout': 3,
'purchase': 5,
'remove_from_cart': 0,
})
ratings = df[["userid", 'itemid', "rating", 'timestamp']].copy()
data = CollabDataBunch.from_df(ratings, seed=42)
data
TabularDataBunch; Train: LabelList (79546 items) x: CollabList userid 3141; itemid 236; ,userid 3421; itemid 1001; ,userid 550; itemid 972; ,userid 550; itemid 972; ,userid 550; itemid 972; y: FloatList 1.0,2.0,1.0,1.0,2.0 Path: .; Valid: LabelList (19886 items) x: CollabList userid 6785; itemid 183; ,userid 1458; itemid 1356; ,userid 3817; itemid 2368; ,userid 9777; itemid 2466; ,userid 11077; itemid 1359; y: FloatList 1.0,3.0,3.0,1.0,2.0 Path: .; Test: None
#hide
learn = collab_learner(data, n_factors=50, y_range=[0,5.5])
learn.lr_find()
learn.recorder.plot(skip_end=15)
epoch | train_loss | valid_loss | time |
---|---|---|---|
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.
learn.fit_one_cycle(1, 5e-6)
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 2.054070 | 2.029182 | 00:20 |
learn.summary()
EmbeddingDotBias ====================================================================== Layer (type) Output Shape Param # Trainable ====================================================================== Embedding [50] 534,000 True ______________________________________________________________________ Embedding [50] 129,150 True ______________________________________________________________________ Embedding [1] 10,680 True ______________________________________________________________________ Embedding [1] 2,583 True ______________________________________________________________________ Total params: 676,413 Total trainable params: 676,413 Total non-trainable params: 0 Optimized with 'torch.optim.adam.Adam', betas=(0.9, 0.99) Using true weight decay as discussed in https://www.fast.ai/2018/07/02/adam-weight-decay/ Loss function : FlattenedLoss ====================================================================== Callbacks functions applied
learn.fit(10, 1e-3)
epoch | train_loss | valid_loss | time |
---|---|---|---|
0 | 1.770657 | 1.751797 | 00:18 |
1 | 1.410351 | 1.528533 | 00:17 |
2 | 1.153979 | 1.399136 | 00:17 |
3 | 0.911953 | 1.326476 | 00:17 |
4 | 0.784223 | 1.279517 | 00:17 |
5 | 0.695546 | 1.248469 | 00:17 |
6 | 0.637151 | 1.230954 | 00:18 |
7 | 0.600011 | 1.216617 | 00:18 |
8 | 0.573309 | 1.209507 | 00:18 |
9 | 0.571132 | 1.204903 | 00:18 |
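As a quick sanity check on the collaborative-filtering learner, fastai v1's get_preds can score the held-out pairs directly; this is only a sketch, assuming the learn and data objects defined above.
# Predicted vs. actual ratings on the validation split.
preds, targets = learn.get_preds(ds_type=DatasetType.Valid)
rmse = float(((preds.squeeze() - targets) ** 2).mean().sqrt())
print(f'Validation RMSE: {rmse:.3f}')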