Collaborative filtering with side information


This IPython notebook illustrates the usage of the cmfrec Python package for building recommender systems through different matrix factorization models, with or without side information about users and items. For more details, see the references at the bottom.

The example uses the MovieLens-1M data, which consists of movie ratings from users along with user demographic information, plus the movie tag genome. Note however that, for implicit-feedback datasets (e.g. item purchases), it's recommended to use different models than the ones shown here (see the package's documentation for details about its models aimed at implicit-feedback data).

Small note: if the TOC here is not clickable or the math symbols don't render properly, try viewing this same notebook from nbviewer by following this link.

Sections

1. Loading the data

2. Fitting recommender models

3. Examining top-N recommended lists

4. Tuning model parameters

5. Recommendations for new users

6. Evaluating models

7. Adding implicit features and dynamic regularization

8. References


1. Loading the data

This section uses pre-processed data from the MovieLens datasets joined with external zip codes databases. The script for processing and cleaning the data can be found in another notebook here.

In [1]:
import numpy as np, pandas as pd, pickle

ratings = pickle.load(open("ratings.p", "rb"))
item_sideinfo_pca = pickle.load(open("item_sideinfo_pca.p", "rb"))
user_side_info = pickle.load(open("user_side_info.p", "rb"))
movie_id_to_title = pickle.load(open("movie_id_to_title.p", "rb"))

Ratings data

In [2]:
ratings.head()
Out[2]:
UserId ItemId Rating
0 1 1193 5
1 1 661 3
2 1 914 3
3 1 3408 4
4 1 2355 5

Item attributes (reduced through PCA)

In [3]:
item_sideinfo_pca.head()
Out[3]:
ItemId pc0 pc1 pc2 pc3 pc4 pc5 pc6 pc7 pc8 ... pc40 pc41 pc42 pc43 pc44 pc45 pc46 pc47 pc48 pc49
0 1 1.192433 2.034965 2.679781 1.154823 0.715302 0.982528 1.251208 -0.792800 1.605826 ... -0.312568 -0.089161 -0.053227 0.230116 0.210211 0.098109 -0.267214 -0.191760 0.032658 0.065116
1 2 -1.333200 1.719346 1.383137 0.788332 -0.487431 0.376546 0.803104 -0.606602 0.914494 ... 0.265190 -0.294507 0.058127 0.013155 0.232314 0.332297 0.271467 0.112416 -0.111115 -0.042173
2 3 -1.363421 -0.034093 0.528633 -0.312122 0.468820 0.164593 0.021909 0.161554 -0.231992 ... 0.212216 -0.103897 -0.279957 0.032861 0.054336 0.212665 -0.174429 -0.105532 -0.147704 0.137516
3 4 -1.238094 -1.014399 0.790394 -0.296004 -0.095043 -0.052266 -0.180244 -0.768811 -0.400559 ... 0.074246 0.033976 -0.225773 0.416155 0.282287 -0.324412 -0.228171 -0.191667 -0.488943 -0.468794
4 5 -1.613220 -0.280142 1.119149 -0.130238 0.397091 0.187158 0.108864 -0.273748 -0.260166 ... 0.110984 -0.126241 -0.234988 0.487649 -0.027990 0.103862 -0.218475 -0.315778 -0.070719 0.052140

5 rows × 51 columns

User attributes (one-hot encoded)

In [4]:
user_side_info.head()
Out[4]:
UserId Gender_F Gender_M Age_1 Age_18 Age_25 Age_35 Age_45 Age_50 Age_56 ... Occupation_unemployed Occupation_writer Region_Middle Atlantic Region_Midwest Region_New England Region_South Region_Southwest Region_UnknownOrNonUS Region_UsOther Region_West
0 1 1 0 1 0 0 0 0 0 0 ... 0 0 0 1 0 0 0 0 0 0
1 2 0 1 0 0 0 0 0 0 1 ... 0 0 0 0 0 1 0 0 0 0
2 3 0 1 0 0 1 0 0 0 0 ... 0 0 0 1 0 0 0 0 0 0
3 4 0 1 0 0 0 0 1 0 0 ... 0 0 0 0 1 0 0 0 0 0
4 5 0 1 0 0 1 0 0 0 0 ... 0 1 0 1 0 0 0 0 0 0

5 rows × 39 columns

2. Fitting recommender models

This section fits different recommendation models and then compares the recommendations produced by them.

2.1 Classic model

Usual low-rank matrix factorization model with no user/item attributes: $$ \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B $$ Where

  • $\mathbf{X}$ is the ratings matrix, in which users are rows, items are columns, and the entries denote the ratings.
  • $\mathbf{A}$ is the user-factors matrix.
  • $\mathbf{B}$ is the item-factors matrix.
  • $\mu$ is the average rating.
  • $\mathbf{b}_A$ are user-specific biases (column vector, one entry per user).
  • $\mathbf{b}_B$ are item-specific biases (row vector, one entry per item).

(For more details see references at the bottom)
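
As a sanity check of the formula, the prediction rule can be written out in NumPy with random stand-in parameters (these are not fitted values, just shapes):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3

# Random stand-ins for the fitted model parameters
A = rng.normal(size=(n_users, k))   # user-factors matrix
B = rng.normal(size=(n_items, k))   # item-factors matrix
mu = 3.5                            # average rating
b_A = rng.normal(size=n_users)      # user biases (one per user)
b_B = rng.normal(size=n_items)      # item biases (one per item)

# Full predicted ratings matrix:
# X_hat[u, i] = a_u . b_i + mu + b_A[u] + b_B[i]
X_hat = A @ B.T + mu + b_A[:, None] + b_B[None, :]

# A single (user, item) prediction uses only that row and column
u, i = 1, 2
single = A[u] @ B[i] + mu + b_A[u] + b_B[i]
```
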

In [5]:
%%time
from cmfrec import CMF

model_no_sideinfo = CMF(method="als", k=40, lambda_=1e+1)
model_no_sideinfo.fit(ratings)
CPU times: user 13 s, sys: 105 ms, total: 13.1 s
Wall time: 892 ms
Out[5]:
Collective matrix factorization model
(explicit-feedback variant)

2.2 Collective model

The collective matrix factorization model extends the earlier model by having the user and item factor matrices also participate in low-rank approximate factorizations of the user and item attribute matrices: $$ \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B ,\:\:\:\: \mathbf{U} \approx \mathbf{A} \mathbf{C}^T + \mathbf{\mu}_U ,\:\:\:\: \mathbf{I} \approx \mathbf{B} \mathbf{D}^T + \mathbf{\mu}_I $$

Where

  • $\mathbf{U}$ is the user attributes matrix, in which users are rows and attributes are columns.
  • $\mathbf{I}$ is the item attributes matrix, in which items are rows and attributes are columns.
  • $\mathbf{\mu}_U$ are the column means for the user attributes (column vector).
  • $\mathbf{\mu}_I$ are the columns means for the item attributes (column vector).
  • $\mathbf{C}$ and $\mathbf{D}$ are attribute-factor matrices (also model parameters).

In addition, this package can also apply sigmoid transformations on the attribute columns which are binary. Note that this requires a different optimization approach which is slower than the ALS (alternating least-squares) method used here.
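
The three coupled factorizations can be written out in NumPy with random stand-in parameters (not fitted values) to make the shared role of $\mathbf{A}$ and $\mathbf{B}$ explicit:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, n_user_attr, n_item_attr, k = 4, 5, 6, 3, 2

A = rng.normal(size=(n_users, k))      # user factors (shared across factorizations)
B = rng.normal(size=(n_items, k))      # item factors (shared across factorizations)
C = rng.normal(size=(n_user_attr, k))  # user-attribute factors
D = rng.normal(size=(n_item_attr, k))  # item-attribute factors
mu_U = rng.normal(size=n_user_attr)    # user-attribute column means
mu_I = rng.normal(size=n_item_attr)    # item-attribute column means

# The same A and B appear in all three approximations, which is what
# lets the attribute data inform the rating predictions:
X_hat = A @ B.T          # ratings approximation (biases omitted for brevity)
U_hat = A @ C.T + mu_U   # user-attributes approximation
I_hat = B @ D.T + mu_I   # item-attributes approximation
```
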

In [6]:
%%time
model_with_sideinfo = CMF(method="als", k=40, lambda_=1e+1, w_main=0.5, w_user=0.25, w_item=0.25)
model_with_sideinfo.fit(X=ratings, U=user_side_info, I=item_sideinfo_pca)

### for the sigmoid transformations:
# model_with_sideinfo = CMF(method="lbfgs", maxiter=0, k=40, lambda_=1e+1, w_main=0.5, w_user=0.25, w_item=0.25)
# model_with_sideinfo.fit(X=ratings, U_bin=user_side_info, I=item_sideinfo_pca)
CPU times: user 17.2 s, sys: 168 ms, total: 17.4 s
Wall time: 1.18 s
Out[6]:
Collective matrix factorization model
(explicit-feedback variant)

(Note that, since the side info has variables on a different scale, even though the weights sum to 1, the model is not equivalent to the earlier one w.r.t. the regularization parameter; this type of model also requires more hyperparameter tuning.)

2.3 Content-based model

This is a model in which the factorizing matrices are constrained to be linear combinations of the user and item attributes, thereby making the recommendations based entirely on side information, with no free parameters for specific users or items: $$ \mathbf{X} \approx (\mathbf{U} \mathbf{C}) (\mathbf{I} \mathbf{D})^T + \mu $$

(Note that the movie attributes are not available for all the movies with ratings)
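
The defining property (the factor matrices being linear combinations of the attributes) means a brand-new user with known attributes can be scored immediately. A NumPy sketch with random stand-in values, not fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_items, n_user_attr, n_item_attr, k = 4, 5, 6, 3, 2

U = rng.normal(size=(n_users, n_user_attr))  # user attributes
I = rng.normal(size=(n_items, n_item_attr))  # item attributes
C = rng.normal(size=(n_user_attr, k))        # attribute-to-factor weights (users)
D = rng.normal(size=(n_item_attr, k))        # attribute-to-factor weights (items)
mu = 3.5

# Factors are fully determined by the attributes: no free per-user
# or per-item parameters.
A = U @ C
B = I @ D
X_hat = A @ B.T + mu

# Scoring a user never seen in training, given only her attribute vector:
new_user_attrs = rng.normal(size=n_user_attr)
new_user_scores = (new_user_attrs @ C) @ B.T + mu
```
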

In [7]:
%%time
from cmfrec import ContentBased

model_content_based = ContentBased(k=40, maxiter=0, user_bias=False, item_bias=False)
model_content_based.fit(X=ratings.loc[ratings.ItemId.isin(item_sideinfo_pca.ItemId)],
                        U=user_side_info,
                        I=item_sideinfo_pca.loc[item_sideinfo_pca.ItemId.isin(ratings.ItemId)])
CPU times: user 26min 6s, sys: 9.69 s, total: 26min 16s
Wall time: 1min 39s
Out[7]:
Content-based factorization model
(explicit-feedback)

2.4 Non-personalized model

This is an intercepts-only version of the classical model, which estimates one parameter per user and one parameter per item, and as such produces a simple ranking of the items based on those parameters. It is intended for comparison purposes and can be helpful to check that the recommendations for different users show some variability (e.g. setting too-large regularization values will tend to make all personalized recommended lists similar to each other).
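
With intercepts only, every user's scores differ by a constant shift, so the item ordering is identical for all users. A small NumPy illustration with made-up bias values:

```python
import numpy as np

mu = 3.5
b_user = np.array([0.2, -0.1])             # user biases (shift scores, not rankings)
b_item = np.array([0.5, -0.3, 0.1, 0.0])   # item biases (determine the ranking)

# Score for user u and item i is mu + b_user[u] + b_item[i]:
scores = mu + b_user[:, None] + b_item[None, :]

# The top-N order is just the item biases sorted in descending order
ranking = np.argsort(-b_item)
```
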

In [8]:
%%time
from cmfrec import MostPopular

model_non_personalized = MostPopular(user_bias=True, implicit=False)
model_non_personalized.fit(ratings)
CPU times: user 1.02 s, sys: 39.9 ms, total: 1.06 s
Wall time: 70.6 ms
Out[8]:
Most-Popular recommendation model
(explicit-feedback variant)

3. Examining top-N recommended lists

This section will examine what each model would recommend to the user with ID 948.

This is the demographic information for the user:

In [9]:
user_side_info.loc[user_side_info.UserId == 948].T.where(lambda x: x > 0).dropna()
Out[9]:
947
UserId 948.0
Gender_M 1.0
Age_56 1.0
Occupation_programmer 1.0
Region_Midwest 1.0

These are the highest-rated movies from the user:

In [10]:
ratings\
    .loc[ratings.UserId == 948]\
    .sort_values("Rating", ascending=False)\
    .assign(Movie=lambda x: x.ItemId.map(movie_id_to_title))\
    .head(10)
Out[10]:
UserId ItemId Rating Movie
146721 948 3789 5 Pawnbroker, The (1965)
146889 948 2665 5 Earth Vs. the Flying Saucers (1956)
146871 948 2640 5 Superman (1978)
146872 948 2641 5 Superman II (1980)
147105 948 2761 5 Iron Giant, The (1999)
146875 948 2644 5 Dracula (1931)
146878 948 2648 5 Frankenstein (1931)
147097 948 1019 5 20,000 Leagues Under the Sea (1954)
146881 948 2657 5 Rocky Horror Picture Show, The (1975)
146884 948 2660 5 Thing From Another World, The (1951)

These are the lowest-rated movies from the user:

In [11]:
ratings\
    .loc[ratings.UserId == 948]\
    .sort_values("Rating", ascending=True)\
    .assign(Movie=lambda x: x.ItemId.map(movie_id_to_title))\
    .head(10)
Out[11]:
UserId ItemId Rating Movie
147237 948 1247 1 Graduate, The (1967)
147173 948 70 1 From Dusk Till Dawn (1996)
146768 948 748 1 Arrival, The (1996)
147135 948 45 1 To Die For (1995)
146812 948 780 1 Independence Day (ID4) (1996)
146813 948 788 1 Nutty Professor, The (1996)
146814 948 3201 1 Five Easy Pieces (1970)
147118 948 356 1 Forrest Gump (1994)
146821 948 3070 1 Adventures of Buckaroo Bonzai Across the 8th D...
146822 948 1617 1 L.A. Confidential (1997)

Now producing recommendations from each model:

In [12]:
### Will exclude already-seen movies
exclude = ratings.ItemId.loc[ratings.UserId == 948]
exclude_cb = exclude.loc[exclude.isin(item_sideinfo_pca.ItemId)]

### Recommended lists with those excluded
recommended_non_personalized = model_non_personalized.topN(user=948, n=10, exclude=exclude)
recommended_no_side_info = model_no_sideinfo.topN(user=948, n=10, exclude=exclude)
recommended_with_side_info = model_with_sideinfo.topN(user=948, n=10, exclude=exclude)
recommended_content_based = model_content_based.topN(user=948, n=10, exclude=exclude_cb)
In [13]:
recommended_non_personalized
Out[13]:
array([2019,  318, 2905,  745, 1148, 1212, 3435,  923,  720, 3307])

A handy function to print top-N recommended lists with associated information:

In [14]:
from collections import defaultdict

# aggregate statistics
avg_movie_rating = defaultdict(lambda: 0)
num_ratings_per_movie = defaultdict(lambda: 0)
for i in ratings.groupby('ItemId')['Rating'].mean().to_frame().itertuples():
    avg_movie_rating[i.Index] = i.Rating
for i in ratings.groupby('ItemId')['Rating'].count().to_frame().itertuples():
    num_ratings_per_movie[i.Index] = i.Rating

# function to print recommended lists more nicely
def print_reclist(reclist):
    list_w_info = [str(m + 1) + ") - " + movie_id_to_title[reclist[m]] +\
        " - Average Rating: " + str(np.round(avg_movie_rating[reclist[m]], 2))+\
        " - Number of ratings: " + str(num_ratings_per_movie[reclist[m]])\
                   for m in range(len(reclist))]
    print("\n".join(list_w_info))
    
print("Recommended from non-personalized model")
print_reclist(recommended_non_personalized)
print("----------------")
print("Recommended from ratings-only model")
print_reclist(recommended_no_side_info)
print("----------------")
print("Recommended from attributes-only model")
print_reclist(recommended_content_based)
print("----------------")
print("Recommended from hybrid model")
print_reclist(recommended_with_side_info)
Recommended from non-personalized model
1) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
2) - Shawshank Redemption, The (1994) - Average Rating: 4.55 - Number of ratings: 2227
3) - Sanjuro (1962) - Average Rating: 4.61 - Number of ratings: 69
4) - Close Shave, A (1995) - Average Rating: 4.52 - Number of ratings: 657
5) - Wrong Trousers, The (1993) - Average Rating: 4.51 - Number of ratings: 882
6) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
7) - Double Indemnity (1944) - Average Rating: 4.42 - Number of ratings: 551
8) - Citizen Kane (1941) - Average Rating: 4.39 - Number of ratings: 1116
9) - Wallace & Gromit: The Best of Aardman Animation (1996) - Average Rating: 4.43 - Number of ratings: 438
10) - City Lights (1931) - Average Rating: 4.39 - Number of ratings: 271
----------------
Recommended from ratings-only model
1) - Babe (1995) - Average Rating: 3.89 - Number of ratings: 1751
2) - Singin' in the Rain (1952) - Average Rating: 4.28 - Number of ratings: 751
3) - Mummy, The (1932) - Average Rating: 3.54 - Number of ratings: 162
4) - Gold Rush, The (1925) - Average Rating: 4.19 - Number of ratings: 275
5) - Bride of Frankenstein (1935) - Average Rating: 3.91 - Number of ratings: 216
6) - City Lights (1931) - Average Rating: 4.39 - Number of ratings: 271
7) - Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922) - Average Rating: 3.99 - Number of ratings: 238
8) - Wolf Man, The (1941) - Average Rating: 3.76 - Number of ratings: 134
9) - American History X (1998) - Average Rating: 4.23 - Number of ratings: 640
10) - Chariots of Fire (1981) - Average Rating: 3.8 - Number of ratings: 634
----------------
Recommended from attributes-only model
1) - Shawshank Redemption, The (1994) - Average Rating: 4.55 - Number of ratings: 2227
2) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
3) - City Lights (1931) - Average Rating: 4.39 - Number of ratings: 271
4) - Jean de Florette (1986) - Average Rating: 4.32 - Number of ratings: 216
5) - It Happened One Night (1934) - Average Rating: 4.28 - Number of ratings: 374
6) - Central Station (Central do Brasil) (1998) - Average Rating: 4.28 - Number of ratings: 215
7) - Best Years of Our Lives, The (1946) - Average Rating: 4.12 - Number of ratings: 236
8) - Man Who Would Be King, The (1975) - Average Rating: 4.13 - Number of ratings: 310
9) - In the Heat of the Night (1967) - Average Rating: 4.13 - Number of ratings: 348
10) - Double Indemnity (1944) - Average Rating: 4.42 - Number of ratings: 551
----------------
Recommended from hybrid model
1) - Babe (1995) - Average Rating: 3.89 - Number of ratings: 1751
2) - It's a Wonderful Life (1946) - Average Rating: 4.3 - Number of ratings: 729
3) - Beauty and the Beast (1991) - Average Rating: 3.89 - Number of ratings: 1060
4) - Singin' in the Rain (1952) - Average Rating: 4.28 - Number of ratings: 751
5) - Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922) - Average Rating: 3.99 - Number of ratings: 238
6) - Bride of Frankenstein (1935) - Average Rating: 3.91 - Number of ratings: 216
7) - Gold Rush, The (1925) - Average Rating: 4.19 - Number of ratings: 275
8) - Invasion of the Body Snatchers (1956) - Average Rating: 3.91 - Number of ratings: 628
9) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
10) - Green Mile, The (1999) - Average Rating: 4.15 - Number of ratings: 1222

(As can be seen, the personalized models tend to recommend very old movies, which is what this user seems to rate highly, with no overlap with the non-personalized recommendations.)

4. Tuning model parameters

The models here offer many tuneable parameters which can be tweaked in order to alter the recommended lists in some way. For example, setting a low regularization to the item biases will tend to favor movies with a high average rating regardless of the number of ratings, while setting a high regularization for the factorizing matrices will tend to produce the same recommendations for all users.
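
The bias-shrinkage effect can be illustrated with the closed-form solution of a one-dimensional ridge problem (a self-contained NumPy sketch, not cmfrec code): with L2 regularization, an item's estimated bias is the sum of its residuals divided by (count + λ), so large λ shrinks every bias toward zero and penalizes items with few ratings the most.

```python
import numpy as np

def item_biases(residuals_per_item, lam):
    # Minimizing sum_j (r_j - b)^2 + lam * b^2 over b gives
    # b = sum(r) / (n + lam): shrinkage toward 0, strongest for small n.
    return np.array([np.sum(r) / (len(r) + lam) for r in residuals_per_item])

popular = np.full(1000, 0.5)  # 1000 ratings, each +0.5 above the global mean
niche = np.full(5, 1.5)       # 5 ratings, each +1.5 above the global mean

for lam in [0.0, 10.0, 1000.0]:
    print(lam, item_biases([popular, niche], lam))
```

With λ = 0 the niche item (higher average) ranks first; with large λ the popular item overtakes it, mirroring the behavior described above.
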

In [15]:
### Less personalized (underfitted)
reclist = \
    CMF(lambda_=[1e+3, 1e+1, 1e+2, 1e+2, 1e+2, 1e+2])\
        .fit(ratings)\
        .topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
1) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
2) - Shawshank Redemption, The (1994) - Average Rating: 4.55 - Number of ratings: 2227
3) - Close Shave, A (1995) - Average Rating: 4.52 - Number of ratings: 657
4) - Wrong Trousers, The (1993) - Average Rating: 4.51 - Number of ratings: 882
5) - Sanjuro (1962) - Average Rating: 4.61 - Number of ratings: 69
6) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
7) - Double Indemnity (1944) - Average Rating: 4.42 - Number of ratings: 551
8) - Wallace & Gromit: The Best of Aardman Animation (1996) - Average Rating: 4.43 - Number of ratings: 438
9) - Citizen Kane (1941) - Average Rating: 4.39 - Number of ratings: 1116
10) - City Lights (1931) - Average Rating: 4.39 - Number of ratings: 271
In [16]:
### More personalized (overfitted)
reclist = \
    CMF(lambda_=[0., 1e+3, 1e-1, 1e-1, 1e-1, 1e-1])\
        .fit(ratings)\
        .topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
1) - Plan 9 from Outer Space (1958) - Average Rating: 2.63 - Number of ratings: 249
2) - Anne Frank Remembered (1995) - Average Rating: 4.1 - Number of ratings: 41
3) - Next Friday (1999) - Average Rating: 2.6 - Number of ratings: 168
4) - Muppet Christmas Carol, The (1992) - Average Rating: 3.61 - Number of ratings: 262
5) - Snow Day (2000) - Average Rating: 2.21 - Number of ratings: 122
6) - Black Mask (Hak hap) (1996) - Average Rating: 3.08 - Number of ratings: 66
7) - Foreign Student (1994) - Average Rating: 3.0 - Number of ratings: 2
8) - Ballad of Narayama, The (Narayama Bushiko) (1982) - Average Rating: 3.95 - Number of ratings: 19
9) - Around the World in 80 Days (1956) - Average Rating: 3.6 - Number of ratings: 269
10) - Faust (1994) - Average Rating: 3.48 - Number of ratings: 31

The collective model can also have variations such as weighting each factorization differently, or setting components (factors) that are not to be shared between factorizations (not shown).

In [17]:
### More oriented towards content-based than towards collaborative-filtering
reclist = \
    CMF(k=40, w_main=0.5, w_item=3., w_user=5., lambda_=1e+1)\
        .fit(ratings, U=user_side_info, I=item_sideinfo_pca)\
        .topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
1) - Wrong Trousers, The (1993) - Average Rating: 4.51 - Number of ratings: 882
2) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
3) - It's a Wonderful Life (1946) - Average Rating: 4.3 - Number of ratings: 729
4) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
5) - Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922) - Average Rating: 3.99 - Number of ratings: 238
6) - Close Shave, A (1995) - Average Rating: 4.52 - Number of ratings: 657
7) - Singin' in the Rain (1952) - Average Rating: 4.28 - Number of ratings: 751
8) - Shadow of a Doubt (1943) - Average Rating: 4.27 - Number of ratings: 233
9) - Citizen Kane (1941) - Average Rating: 4.39 - Number of ratings: 1116
10) - Christmas Carol, A (1938) - Average Rating: 3.99 - Number of ratings: 194

5. Recommendations for new users

Models can also be used to make recommendations for new users based on ratings and/or side information.

_(Be aware that, due to the nature of floating-point arithmetic, there might be some slight discrepancies between the results from topN and topN_warm)_
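
In the simplest case (ignoring biases and side information), obtaining factors for a new user from her ratings reduces to a regularized least-squares solve against the fixed item factors. A hand-rolled NumPy sketch, not cmfrec's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
k, n_rated = 3, 6
B_rated = rng.normal(size=(n_rated, k))  # factors of the items the new user rated
x = rng.normal(size=n_rated)             # the new user's (centered) ratings
lam = 1.0                                # regularization parameter

# Solve (B^T B + lam * I) a = B^T x for the new user's factor vector a
a = np.linalg.solve(B_rated.T @ B_rated + lam * np.eye(k), B_rated.T @ x)

# Scores for all items then come from the usual dot products a . b_i
```
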

In [18]:
print_reclist(model_with_sideinfo.topN_warm(X_col=ratings.ItemId.loc[ratings.UserId == 948],
                                            X_val=ratings.Rating.loc[ratings.UserId == 948],
                                            exclude=exclude))
1) - Babe (1995) - Average Rating: 3.89 - Number of ratings: 1751
2) - It's a Wonderful Life (1946) - Average Rating: 4.3 - Number of ratings: 729
3) - Beauty and the Beast (1991) - Average Rating: 3.89 - Number of ratings: 1060
4) - Singin' in the Rain (1952) - Average Rating: 4.28 - Number of ratings: 751
5) - Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922) - Average Rating: 3.99 - Number of ratings: 238
6) - Bride of Frankenstein (1935) - Average Rating: 3.91 - Number of ratings: 216
7) - Gold Rush, The (1925) - Average Rating: 4.19 - Number of ratings: 275
8) - Invasion of the Body Snatchers (1956) - Average Rating: 3.91 - Number of ratings: 628
9) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
10) - Green Mile, The (1999) - Average Rating: 4.15 - Number of ratings: 1222
In [19]:
print_reclist(model_with_sideinfo.topN_warm(X_col=ratings.ItemId.loc[ratings.UserId == 948],
                                            X_val=ratings.Rating.loc[ratings.UserId == 948],
                                            U=user_side_info.loc[user_side_info.UserId == 948],
                                            exclude=exclude))
1) - Babe (1995) - Average Rating: 3.89 - Number of ratings: 1751
2) - It's a Wonderful Life (1946) - Average Rating: 4.3 - Number of ratings: 729
3) - Beauty and the Beast (1991) - Average Rating: 3.89 - Number of ratings: 1060
4) - Singin' in the Rain (1952) - Average Rating: 4.28 - Number of ratings: 751
5) - Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922) - Average Rating: 3.99 - Number of ratings: 238
6) - Bride of Frankenstein (1935) - Average Rating: 3.91 - Number of ratings: 216
7) - Gold Rush, The (1925) - Average Rating: 4.19 - Number of ratings: 275
8) - Invasion of the Body Snatchers (1956) - Average Rating: 3.91 - Number of ratings: 628
9) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
10) - Green Mile, The (1999) - Average Rating: 4.15 - Number of ratings: 1222
In [20]:
print_reclist(model_with_sideinfo.topN_cold(U=user_side_info.loc[user_side_info.UserId == 948].drop("UserId", axis=1),
                                            exclude=exclude))
1) - Shawshank Redemption, The (1994) - Average Rating: 4.55 - Number of ratings: 2227
2) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
3) - Wrong Trousers, The (1993) - Average Rating: 4.51 - Number of ratings: 882
4) - Close Shave, A (1995) - Average Rating: 4.52 - Number of ratings: 657
5) - Sanjuro (1962) - Average Rating: 4.61 - Number of ratings: 69
6) - Wallace & Gromit: The Best of Aardman Animation (1996) - Average Rating: 4.43 - Number of ratings: 438
7) - Double Indemnity (1944) - Average Rating: 4.42 - Number of ratings: 551
8) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
9) - Life Is Beautiful (La Vita è bella) (1997) - Average Rating: 4.33 - Number of ratings: 1152
10) - Grand Day Out, A (1992) - Average Rating: 4.36 - Number of ratings: 473

This last one is very similar to the non-personalized recommended list; that is, the user side information had very little leverage in the model, at least for this user. In this regard, the content-based model tends to be better at cold-start recommendations:

In [21]:
print_reclist(model_content_based.topN_cold(U=user_side_info.loc[user_side_info.UserId == 948].drop("UserId", axis=1),
                                            exclude=exclude_cb))
1) - Shawshank Redemption, The (1994) - Average Rating: 4.55 - Number of ratings: 2227
2) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
3) - City Lights (1931) - Average Rating: 4.39 - Number of ratings: 271
4) - Jean de Florette (1986) - Average Rating: 4.32 - Number of ratings: 216
5) - It Happened One Night (1934) - Average Rating: 4.28 - Number of ratings: 374
6) - Central Station (Central do Brasil) (1998) - Average Rating: 4.28 - Number of ratings: 215
7) - Best Years of Our Lives, The (1946) - Average Rating: 4.12 - Number of ratings: 236
8) - Man Who Would Be King, The (1975) - Average Rating: 4.13 - Number of ratings: 310
9) - In the Heat of the Night (1967) - Average Rating: 4.13 - Number of ratings: 348
10) - Double Indemnity (1944) - Average Rating: 4.42 - Number of ratings: 551

(For this use-case, it would also be better to add item biases to the content-based model, though.)

6. Evaluating models

This section shows usage of the predict family of functions for getting the predicted rating for a given user and item, in order to calculate evaluation metrics such as RMSE and tune model parameters.

Note that, while widely used in earlier literature, RMSE might not provide a good overview of the ranking of items (which is what matters for recommendations), and it's recommended to also evaluate top-K ranking metrics (e.g. precision@K, NDCG@K), correlations, etc.

Also be aware that there is a different class, CMF_implicit, which might perform better at implicit-feedback ranking metrics such as precision@K.
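
For instance, a precision@K metric can be computed directly from a top-K recommended list and a held-out set of relevant items. A minimal sketch (the item IDs below are illustrative, not from the models above):

```python
import numpy as np

def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommended items that appear in the
    user's relevant (e.g. highly-rated held-out) items."""
    topk = np.asarray(recommended)[:k]
    return np.isin(topk, np.asarray(relevant)).mean()

# 3 of the top 5 recommendations are in the relevant set -> 0.6
print(precision_at_k([10, 20, 30, 40, 50], [20, 40, 50, 99], k=5))
```
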

When making recommendations, there's quite a difference between making predictions based on ratings data and based on side information alone. In this regard, one can classify predictions into 4 types:

  1. Predictions for users and items which were both in the training data.
  2. Predictions for users which were in the training data and items which were not in the training data.
  3. Predictions for users which were not in the training data and items which were in the training data.
  4. Predictions for users and items, neither of which were in the training data.

(One could sub-divide further according to users/items which were present in the training data with only ratings or with only side information, but this notebook will not go into that level of detail)

The classic model is only able to make predictions for the first case, while the collective model can leverage the side information in order to make predictions for (2) and (3). In theory, it could also do (4), but this is not recommended and the API does not provide such functionality.

The content-based model, on the other hand, is an ideal approach for case (4). The package also provides a different model (the "offsets" model - see references at the bottom) aimed at improving cases (2) and (3) when there is side information about only users or only items, at the expense of case (1), but such models are not shown in this notebook.


Producing a training and test set split of the ratings and side information:

In [22]:
from sklearn.model_selection import train_test_split

users_train, users_test = train_test_split(ratings.UserId.unique(), test_size=0.2, random_state=1)
items_train, items_test = train_test_split(ratings.ItemId.unique(), test_size=0.2, random_state=2)

ratings_train, ratings_test1 = train_test_split(ratings.loc[ratings.UserId.isin(users_train) &
                                                            ratings.ItemId.isin(items_train)],
                                                test_size=0.2, random_state=123)
users_train = ratings_train.UserId.unique()
items_train = ratings_train.ItemId.unique()
ratings_test1 = ratings_test1.loc[ratings_test1.UserId.isin(users_train) &
                                  ratings_test1.ItemId.isin(items_train)]

user_attr_train = user_side_info.loc[user_side_info.UserId.isin(users_train)]
item_attr_train = item_sideinfo_pca.loc[item_sideinfo_pca.ItemId.isin(items_train)]

ratings_test2 = ratings.loc[ratings.UserId.isin(users_train) &
                            ~ratings.ItemId.isin(items_train) &
                            ratings.ItemId.isin(item_sideinfo_pca.ItemId)]
ratings_test3 = ratings.loc[~ratings.UserId.isin(users_train) &
                            ratings.ItemId.isin(items_train) &
                            ratings.UserId.isin(user_side_info.UserId) &
                            ratings.ItemId.isin(item_sideinfo_pca.ItemId)]
ratings_test4 = ratings.loc[~ratings.UserId.isin(users_train) &
                            ~ratings.ItemId.isin(items_train) &
                            ratings.UserId.isin(user_side_info.UserId) &
                            ratings.ItemId.isin(item_sideinfo_pca.ItemId)]


print("Number of ratings in training data: %d" % ratings_train.shape[0])
print("Number of ratings in test data type (1): %d" % ratings_test1.shape[0])
print("Number of ratings in test data type (2): %d" % ratings_test2.shape[0])
print("Number of ratings in test data type (3): %d" % ratings_test3.shape[0])
print("Number of ratings in test data type (4): %d" % ratings_test4.shape[0])
Number of ratings in training data: 512972
Number of ratings in test data type (1): 128221
Number of ratings in test data type (2): 153128
Number of ratings in test data type (3): 138904
Number of ratings in test data type (4): 36450
In [23]:
### Handy usage of Pandas indexing
user_attr_test = user_side_info.set_index("UserId")
item_attr_test = item_sideinfo_pca.set_index("ItemId")

Re-fitting earlier models to the training subset of the earlier data:

In [24]:
m_classic = CMF(k=40)\
                .fit(ratings_train)
m_collective = CMF(k=40, w_main=0.5, w_user=0.5, w_item=0.5)\
                .fit(X=ratings_train,
                     U=user_attr_train,
                     I=item_attr_train)
m_contentbased = ContentBased(k=40, user_bias=False, item_bias=False)\
                .fit(X=ratings_train.loc[ratings_train.UserId.isin(user_attr_train.UserId) &
                                         ratings_train.ItemId.isin(item_attr_train.ItemId)],
                     U=user_attr_train,
                     I=item_attr_train)
m_mostpopular = MostPopular(user_bias=True)\
                .fit(X=ratings_train)

RMSE for users and items which were both in the training data:

In [25]:
from sklearn.metrics import mean_squared_error

pred_nonpersonalized = m_mostpopular.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 non-personalized model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_nonpersonalized,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_nonpersonalized)[0,1]))

pred_ratingsonly = m_classic.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 ratings-only model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_ratingsonly,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_ratingsonly)[0,1]))

pred_hybrid = m_collective.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 hybrid model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_hybrid)[0,1]))

test_cb = ratings_test1.loc[ratings_test1.UserId.isin(user_attr_train.UserId) &
                            ratings_test1.ItemId.isin(item_attr_train.ItemId)]
pred_contentbased = m_contentbased.predict(test_cb.UserId, test_cb.ItemId)
print("RMSE type 1 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(test_cb.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(test_cb.Rating, pred_contentbased)[0,1]))
RMSE type 1 non-personalized model: 0.911 [rho: 0.580]
RMSE type 1 ratings-only model: 0.897 [rho: 0.603]
RMSE type 1 hybrid model: 0.860 [rho: 0.641]
RMSE type 1 content-based model: 0.975 [rho: 0.486]

RMSE for users which were in the training data but items which were not:

In [26]:
pred_hybrid = m_collective.predict_new(ratings_test2.UserId,
                                       item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 hybrid model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test2.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test2.Rating, pred_hybrid)[0,1]))

pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test2.UserId],
                                               item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test2.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(ratings_test2.Rating, pred_contentbased)[0,1]))
RMSE type 2 hybrid model: 1.023 [rho: 0.424]
RMSE type 2 content-based model: 0.977 [rho: 0.484]

RMSE for items which were in the training data but users which were not:

In [27]:
pred_hybrid = m_collective.predict_cold_multiple(item=ratings_test3.ItemId,
                                                 U=user_attr_test.loc[ratings_test3.UserId])
print("RMSE type 3 hybrid model: %.3f  [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test3.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test3.Rating, pred_hybrid)[0,1]))

pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test3.UserId],
                                               item_attr_test.loc[ratings_test3.ItemId])
print("RMSE type 3 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test3.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(ratings_test3.Rating, pred_contentbased)[0,1]))
RMSE type 3 hybrid model: 0.988  [rho: 0.470]
RMSE type 3 content-based model: 0.981 [rho: 0.468]

RMSE for users and items which were not in the training data:

In [28]:
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test4.UserId],
                                               item_attr_test.loc[ratings_test4.ItemId])
print("RMSE type 4 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test4.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(ratings_test4.Rating, pred_contentbased)[0,1]))
RMSE type 4 content-based model: 0.986 [rho: 0.462]
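The RMSE/correlation printout above is repeated verbatim for every model, which invites copy-paste slips in the variable names. A small helper (hypothetical, not part of cmfrec) would make these cells shorter:

```python
import numpy as np

def rmse_and_rho(y_true, y_pred):
    # RMSE plus Pearson correlation -- the two metrics reported above
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    rho = np.corrcoef(y_true, y_pred)[0, 1]
    return rmse, rho

# e.g.: print("RMSE type 1 hybrid model: %.3f [rho: %.3f]" % rmse_and_rho(...))
rmse, rho = rmse_and_rho([5, 3, 4], [4.5, 3.2, 3.9])
print("%.3f %.3f" % (rmse, rho))
```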

7. Adding implicit features and dynamic regularization

In addition to external side information about the users and items, one can also generate features from the same $\mathbf{X}$ data by considering which movies each user rated and which ones they didn't - these are taken as binary features, with the zeros also counting towards the loss/objective function.

The package offers an easy option for automatically generating these features on-the-fly, which can then be used in addition to the external features. The full model now becomes: $$ \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B $$ $$ \mathbf{I}_x \approx \mathbf{A} \mathbf{B}_i^T, \:\: \mathbf{I}_x^T \approx \mathbf{B} \mathbf{A}_i^T $$ $$ \mathbf{U} \approx \mathbf{A} \mathbf{C}^T + \mathbf{\mu}_U ,\:\:\:\: \mathbf{I} \approx \mathbf{B} \mathbf{D}^T + \mathbf{\mu}_I $$

Where:

  • $\mathbf{I}_x$ is a binary matrix with a 1 at position $(i,j)$ if $x_{ij}$ is not missing, and a zero otherwise.
  • $\mathbf{A}_i$ and $\mathbf{B}_i$ are the implicit feature matrices.
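As a toy illustration of the definition above (cmfrec constructs $\mathbf{I}_x$ internally when `add_implicit_features=True`; this is only a sketch of what that matrix contains):

```python
import numpy as np, pandas as pd
from scipy.sparse import coo_matrix

# Toy ratings in the same (UserId, ItemId, Rating) triplet format
ratings_toy = pd.DataFrame({"UserId": [0, 0, 1, 2],
                            "ItemId": [0, 2, 1, 2],
                            "Rating": [5, 3, 4, 2]})

# I_x has a 1 wherever a rating exists, regardless of its value;
# the zeros (non-rated combinations) also count towards the implicit loss
Ix = coo_matrix((np.ones(len(ratings_toy)),
                 (ratings_toy.UserId, ratings_toy.ItemId))).toarray()
print(Ix)
```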

While in the earlier models every user/item had the same regularization applied to its factors, it's also possible to make this regularization scale with the number of ratings of each user and item, which tends to produce better models at the expense of more hyperparameter tuning.
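This is what the `scale_lam` option used below does: conceptually, the penalty on a user's factor vector is multiplied by that user's number of ratings. A toy closed-form ridge update illustrating the effect (a sketch under simplified assumptions, not cmfrec's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5
B = rng.normal(size=(100, k))   # fixed item factors
lam = 0.05

def user_factors(item_ids, x, scale_lam):
    # ALS-style ridge solution for one user's factors:
    #   a = (B_u' B_u + l * I)^-1 B_u' x
    # with l = lam * (number of ratings) when scale_lam is on
    B_u = B[item_ids]
    l = lam * len(item_ids) if scale_lam else lam
    return np.linalg.solve(B_u.T @ B_u + l * np.eye(k), B_u.T @ x)

items, x = np.arange(50), rng.normal(size=50)
a_scaled = user_factors(items, x, scale_lam=True)
a_plain  = user_factors(items, x, scale_lam=False)
# The scaled penalty shrinks this 50-rating user's factors harder
print(np.linalg.norm(a_scaled) < np.linalg.norm(a_plain))
```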

Additionally, the package offers an ALS-Cholesky solver, which is slower than the default conjugate gradient method but tends to give better end results. This section will now use the implicit features and the Cholesky solver, and compare the new models to the previous ones.
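Each ALS update solves a small $k \times k$ positive-definite linear system per user/item: conjugate gradient (`use_cg=True`) approximates it iteratively, while the Cholesky route (`use_cg=False`) factorizes and solves it exactly. A minimal comparison of the two (a sketch; cmfrec's internals are more involved):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
k = 40
B = rng.normal(size=(500, k))       # item factors
x = rng.normal(size=500)            # one user's (centered) ratings
A = B.T @ B + 0.05 * np.eye(k)      # normal equations: symmetric positive-definite
b = B.T @ x

sol_chol = cho_solve(cho_factor(A), b)   # exact, as with use_cg=False
sol_cg, info = cg(A, b)                  # iterative, as with use_cg=True
print(info == 0, np.allclose(sol_chol, sol_cg, atol=1e-4))
```

Both routes land on (nearly) the same solution here; CG trades a little accuracy per sweep for speed on large problems.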

In [29]:
m_implicit = CMF(k=40, add_implicit_features=True,
                 lambda_=0.05, scale_lam=True,
                 w_main=0.7, w_implicit=1., use_cg=False)\
            .fit(X=ratings_train)
m_implicit_plus_collective = \
        CMF(k=40, add_implicit_features=True, use_cg=False,
            lambda_=0.03, scale_lam=True,
            w_main=0.5, w_user=0.3, w_item=0.3, w_implicit=1.)\
            .fit(X=ratings_train,
                 U=user_attr_train,
                 I=item_attr_train)

pred_ratingsonly = m_classic.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 ratings-only model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_ratingsonly,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_ratingsonly)[0,1]))

pred_implicit = m_implicit.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 ratings + implicit + dyn + Chol: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_implicit,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_implicit)[0,1]))

pred_hybrid = m_collective.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 hybrid model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_hybrid)[0,1]))


pred_implicit_plus_collective = m_implicit_plus_collective.\
                                predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 hybrid + implicit + dyn + Chol: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_implicit_plus_collective,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_implicit_plus_collective)[0,1]))
RMSE type 1 ratings-only model: 0.897 [rho: 0.603]
RMSE type 1 ratings + implicit + dyn + Chol: 0.853 [rho: 0.647]
RMSE type 1 hybrid model: 0.860 [rho: 0.641]
RMSE type 1 hybrid + implicit + dyn + Chol: 0.847 [rho: 0.653]

But note that, while the dynamic regularization and the Cholesky method usually lead to improvements, the newly-added implicit features oftentimes result in worse cold-start predictions:

In [30]:
pred_hybrid = m_collective.predict_new(ratings_test2.UserId,
                                       item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 hybrid model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test2.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test2.Rating, pred_hybrid)[0,1]))

pred_implicit_plus_collective = \
                m_implicit_plus_collective\
                    .predict_new(ratings_test2.UserId,
                                 item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 hybrid model + implicit + dyn + Chol: %.3f [rho: %.3f] (might get worse)" %
      (np.sqrt(mean_squared_error(ratings_test2.Rating,
                                  pred_implicit_plus_collective,
                                  squared=True)),
       np.corrcoef(ratings_test2.Rating, pred_implicit_plus_collective)[0,1]))

pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test2.UserId],
                                               item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test2.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(ratings_test2.Rating, pred_contentbased)[0,1]))
RMSE type 2 hybrid model: 1.023 [rho: 0.424]
RMSE type 2 hybrid model + implicit + dyn + Chol: 0.999 [rho: 0.490] (might get worse)
RMSE type 2 content-based model: 0.977 [rho: 0.484]
In [31]:
pred_hybrid = m_collective.predict_cold_multiple(item=ratings_test3.ItemId,
                                                 U=user_attr_test.loc[ratings_test3.UserId])
print("RMSE type 3 hybrid model: %.3f  [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test3.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test3.Rating, pred_hybrid)[0,1]))


pred_implicit_plus_collective = \
    m_implicit_plus_collective\
    .predict_cold_multiple(item=ratings_test3.ItemId,
                           U=user_attr_test.loc[ratings_test3.UserId])
print("RMSE type 3 hybrid model + implicit + dyn + Chol: %.3f  [rho: %.3f] (got worse)" %
      (np.sqrt(mean_squared_error(ratings_test3.Rating,
                                  pred_implicit_plus_collective,
                                  squared=True)),
       np.corrcoef(ratings_test3.Rating, pred_implicit_plus_collective)[0,1]))

pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test3.UserId],
                                               item_attr_test.loc[ratings_test3.ItemId])
print("RMSE type 3 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test3.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(ratings_test3.Rating, pred_contentbased)[0,1]))
RMSE type 3 hybrid model: 0.988  [rho: 0.470]
RMSE type 3 hybrid model + implicit + dyn + Chol: 1.014  [rho: 0.457] (got worse)
RMSE type 3 content-based model: 0.981 [rho: 0.468]

8. References

  • Cortes, David. "Cold-start recommendations in Collective Matrix Factorization." arXiv preprint arXiv:1809.00366 (2018).
  • Singh, Ajit P., and Geoffrey J. Gordon. "Relational learning via collective matrix factorization." Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2008.
  • Takács, Gábor, István Pilászy, and Domonkos Tikk. "Applications of the conjugate gradient method for implicit feedback collaborative filtering." Proceedings of the fifth ACM conference on Recommender systems. 2011.
  • Rendle, Steffen, Li Zhang, and Yehuda Koren. "On the difficulty of evaluating baselines: A study on recommender systems." arXiv preprint arXiv:1905.01395 (2019).
  • Zhou, Yunhong, et al. "Large-scale parallel collaborative filtering for the Netflix prize." International conference on algorithmic applications in management. Springer, Berlin, Heidelberg, 2008.