This IPython notebook illustrates the usage of the cmfrec Python package for building recommender systems through different matrix factorization models with or without using information about user and item attributes – for more details see the references at the bottom.

The example uses the MovieLens-1M data, which consists of movie ratings from users, plus user demographic information and the movie tag genome. Note however that, for implicit-feedback datasets (e.g. item purchases), it's recommended to use different models than the ones shown here (see the documentation for details about the models in the package aimed at implicit-feedback data).

**Small note: if the math symbols don't render properly, try viewing this same notebook through nbviewer following this link.**

In [1]:

```
import numpy as np, pandas as pd, pickle
ratings = pickle.load(open("ratings.p", "rb"))
item_sideinfo_pca = pickle.load(open("item_sideinfo_pca.p", "rb"))
user_side_info = pickle.load(open("user_side_info.p", "rb"))
movie_id_to_title = pickle.load(open("movie_id_to_title.p", "rb"))
```

In [2]:

```
ratings.head()
```

Out[2]:

In [3]:

```
item_sideinfo_pca.head()
```

Out[3]:

In [4]:

```
user_side_info.head()
```

Out[4]:

This section fits different recommendation models and then compares the recommendations produced by them.

Usual low-rank matrix factorization model with no user/item attributes: $$ \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B $$ Where

- $\mathbf{X}$ is the ratings matrix, in which users are rows, items are columns, and the entries denote the ratings.
- $\mathbf{A}$ is the user-factors matrix.
- $\mathbf{B}$ is the item-factors matrix.
- $\mu$ is the average rating.
- $\mathbf{b}_A$ are user-specific biases (row vector).
- $\mathbf{b}_B$ are item-specific biases (column vector).

(For more details see references at the bottom)
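To make the formula concrete, here is a small NumPy sketch of the prediction rule, using random stand-ins for the parameters that cmfrec would actually learn from the ratings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 5, 7, 3

# Random stand-ins for the fitted parameters (cmfrec estimates these)
A = rng.normal(size=(n_users, k))           # user-factors matrix
B = rng.normal(size=(n_items, k))           # item-factors matrix
mu = 3.5                                    # average rating
b_A = rng.normal(scale=0.1, size=n_users)   # user biases
b_B = rng.normal(scale=0.1, size=n_items)   # item biases

# Full approximation X ≈ A B^T + mu + b_A + b_B (biases broadcast over rows/columns)
X_hat = A @ B.T + mu + b_A[:, None] + b_B[None, :]

# A single predicted rating for user u and item i is one entry of X_hat
u, i = 2, 4
pred = A[u] @ B[i] + mu + b_A[u] + b_B[i]
```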

In [5]:

```
%%time
from cmfrec import CMF
model_no_sideinfo = CMF(method="als", k=40, lambda_=1e+1)
model_no_sideinfo.fit(ratings)
```

Out[5]:

The collective matrix factorization model extends the earlier model by having the user and item factor matrices also produce low-rank approximate factorizations of the user and item attributes: $$ \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B ,\:\:\:\: \mathbf{U} \approx \mathbf{A} \mathbf{C}^T + \mathbf{\mu}_U ,\:\:\:\: \mathbf{I} \approx \mathbf{B} \mathbf{D}^T + \mathbf{\mu}_I $$

Where

- $\mathbf{U}$ is the user attributes matrix, in which users are rows and attributes are columns.
- $\mathbf{I}$ is the item attributes matrix, in which items are rows and attributes are columns.
- $\mathbf{\mu}_U$ are the column means for the user attributes (column vector).
- $\mathbf{\mu}_I$ are the column means for the item attributes (column vector).
- $\mathbf{C}$ and $\mathbf{D}$ are attribute-factor matrices (also model parameters).

**In addition**, this package can also apply sigmoid transformations on the attribute columns which are binary. Note that this requires a different optimization approach which is slower than the ALS (alternating least-squares) method used here.
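The shared-factors idea can be illustrated with a NumPy sketch (random stand-in parameters, not cmfrec's actual fitted values): the same $\mathbf{A}$ and $\mathbf{B}$ appear in all three factorizations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, p_user, p_item, k = 5, 6, 4, 3, 2

# Shared factors: A helps reconstruct both the ratings and the user
# attributes, B both the ratings and the item attributes
A = rng.normal(size=(n_users, k))
B = rng.normal(size=(n_items, k))
C = rng.normal(size=(p_user, k))   # user-attribute factors
D = rng.normal(size=(p_item, k))   # item-attribute factors
mu_U = rng.normal(size=p_user)     # user-attribute column means
mu_I = rng.normal(size=p_item)     # item-attribute column means

X_hat = A @ B.T           # ratings factorization (biases omitted for brevity)
U_hat = A @ C.T + mu_U    # user side-info factorization
I_hat = B @ D.T + mu_I    # item side-info factorization
```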

In [6]:

```
%%time
model_with_sideinfo = CMF(method="als", k=40, lambda_=1e+1, w_main=0.5, w_user=0.25, w_item=0.25)
model_with_sideinfo.fit(X=ratings, U=user_side_info, I=item_sideinfo_pca)
### for the sigmoid transformations:
# model_with_sideinfo = CMF(method="lbfgs", maxiter=0, k=40, lambda_=1e+1, w_main=0.5, w_user=0.25, w_item=0.25)
# model_with_sideinfo.fit(X=ratings, U_bin=user_side_info, I=item_sideinfo_pca)
```

Out[6]:

*(Note that, since the side info has variables on a different scale, even though the weights sum to 1, the model is still not equivalent to the earlier one with respect to the regularization parameter - this type of model also requires more hyperparameter tuning.)*

This is a model in which the factorizing matrices are constrained to be linear combinations of the user and item attributes, thereby making the recommendations based entirely on side information, with no free parameters for specific users or items: $$ \mathbf{X} \approx (\mathbf{U} \mathbf{C}) (\mathbf{I} \mathbf{D})^T + \mu $$

*(Note that the movie attributes are not available for all the movies with ratings)*
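A quick sketch of why this model handles cold-start: with random stand-ins for the learned $\mathbf{C}$ and $\mathbf{D}$, predictions for a never-before-seen user require only that user's attribute vector.

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, p_user, p_item, k = 5, 3, 6, 2

I = rng.normal(size=(n_items, p_item))   # item attributes
C = rng.normal(size=(p_user, k))         # attribute-to-factor coefficients
D = rng.normal(size=(p_item, k))         # (random stand-ins here)
mu = 3.5                                 # average rating

u_new = rng.normal(size=p_user)          # attributes of a brand-new user

# Factors are linear combinations of attributes: a = u C, B = I D
a_new = u_new @ C
B = I @ D
preds = a_new @ B.T + mu                 # predicted ratings for the new user
```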

In [7]:

```
%%time
from cmfrec import ContentBased
model_content_based = ContentBased(k=40, maxiter=0, user_bias=False, item_bias=False)
model_content_based.fit(X=ratings.loc[ratings.ItemId.isin(item_sideinfo_pca.ItemId)],
                        U=user_side_info,
                        I=item_sideinfo_pca.loc[item_sideinfo_pca.ItemId.isin(ratings.ItemId)])
```

Out[7]:

This is an intercepts-only version of the classical model, which estimates one parameter per user and one parameter per item, and as such produces a single ranking of the items based on those parameters. It is intended for comparison purposes, and can be helpful to check that the recommendations for different users have some variability (e.g. setting too-large regularization values will tend to make all personalized recommended lists similar to each other).
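The intercepts-only idea can be sketched with plain pandas: shrink each item's average deviation from the global mean toward zero and rank by the result (toy data and a made-up shrinkage strength `lam` below, not cmfrec's exact computation):

```python
import pandas as pd

# Toy ratings with the same column layout as `ratings`
toy = pd.DataFrame({"ItemId": [1, 1, 1, 2, 2, 3],
                    "Rating": [5, 4, 5, 5, 5, 1]})
lam = 2.0                    # hypothetical regularization on the item biases
mu = toy.Rating.mean()       # global average rating

stats = toy.groupby("ItemId")["Rating"].agg(["sum", "count"])
# Regularized item bias: items with few ratings are pulled more
# strongly toward the global mean
item_bias = (stats["sum"] - mu * stats["count"]) / (stats["count"] + lam)
ranking = item_bias.sort_values(ascending=False)
```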

In [8]:

```
%%time
from cmfrec import MostPopular
model_non_personalized = MostPopular(user_bias=True, implicit=False)
model_non_personalized.fit(ratings)
```

Out[8]:

This section examines what each model would recommend to the user with ID 948.

This is the demographic information for the user:

In [9]:

```
user_side_info.loc[user_side_info.UserId == 948].T.where(lambda x: x > 0).dropna()
```

Out[9]:

These are the highest-rated movies from the user:

In [10]:

```
ratings\
    .loc[ratings.UserId == 948]\
    .sort_values("Rating", ascending=False)\
    .assign(Movie=lambda x: x.ItemId.map(movie_id_to_title))\
    .head(10)
```

Out[10]:

These are the lowest-rated movies from the user:

In [11]:

```
ratings\
    .loc[ratings.UserId == 948]\
    .sort_values("Rating", ascending=True)\
    .assign(Movie=lambda x: x.ItemId.map(movie_id_to_title))\
    .head(10)
```

Out[11]:

Now producing recommendations from each model:

In [12]:

```
### Will exclude already-seen movies
exclude = ratings.ItemId.loc[ratings.UserId == 948]
exclude_cb = exclude.loc[exclude.isin(item_sideinfo_pca.ItemId)]
### Recommended lists with those excluded
recommended_non_personalized = model_non_personalized.topN(user=948, n=10, exclude=exclude)
recommended_no_side_info = model_no_sideinfo.topN(user=948, n=10, exclude=exclude)
recommended_with_side_info = model_with_sideinfo.topN(user=948, n=10, exclude=exclude)
recommended_content_based = model_content_based.topN(user=948, n=10, exclude=exclude_cb)
```

In [13]:

```
recommended_non_personalized
```

Out[13]:

A handy function to print top-N recommended lists with associated information:

In [14]:

```
from collections import defaultdict

# aggregate statistics
avg_movie_rating = defaultdict(lambda: 0)
num_ratings_per_movie = defaultdict(lambda: 0)
for i in ratings.groupby('ItemId')['Rating'].mean().to_frame().itertuples():
    avg_movie_rating[i.Index] = i.Rating
for i in ratings.groupby('ItemId')['Rating'].agg(lambda x: len(tuple(x))).to_frame().itertuples():
    num_ratings_per_movie[i.Index] = i.Rating

# function to print recommended lists more nicely
def print_reclist(reclist):
    list_w_info = [str(m + 1) + ") - " + movie_id_to_title[reclist[m]] +
                   " - Average Rating: " + str(np.round(avg_movie_rating[reclist[m]], 2)) +
                   " - Number of ratings: " + str(num_ratings_per_movie[reclist[m]])
                   for m in range(len(reclist))]
    print("\n".join(list_w_info))
print("Recommended from non-personalized model")
print_reclist(recommended_non_personalized)
print("----------------")
print("Recommended from ratings-only model")
print_reclist(recommended_no_side_info)
print("----------------")
print("Recommended from attributes-only model")
print_reclist(recommended_content_based)
print("----------------")
print("Recommended from hybrid model")
print_reclist(recommended_with_side_info)
```

(As can be seen, the personalized recommendations tend to recommend very old movies, which is what this user seems to rate highly, with no overlap with the non-personalized recommendations).

The models here offer many tuneable parameters which can be tweaked in order to alter the recommended lists in some way. For example, setting a low regularization to the item biases will tend to favor movies with a high average rating regardless of the number of ratings, while setting a high regularization for the factorizing matrices will tend to produce the same recommendations for all users.

In [15]:

```
### Less personalized (underfitted)
reclist = \
    CMF(lambda_=[1e+3, 1e+1, 1e+2, 1e+2, 1e+2, 1e+2])\
    .fit(ratings)\
    .topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
```

In [16]:

```
### More personalized (overfitted)
reclist = \
    CMF(lambda_=[0., 1e+3, 1e-1, 1e-1, 1e-1, 1e-1])\
    .fit(ratings)\
    .topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
```

The collective model can also have variations such as weighting each factorization differently, or setting components (factors) that are not to be shared between factorizations (not shown).

In [17]:

```
### More oriented towards content-based than towards collaborative-filtering
reclist = \
    CMF(k=40, w_main=0.5, w_item=3., w_user=5., lambda_=1e+1)\
    .fit(ratings, U=user_side_info, I=item_sideinfo_pca)\
    .topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
```

Models can also be used to make recommendations for new users based on ratings and/or side information.

_(Be aware that, due to the nature of computer floating point arithmetic, there might be some slight discrepancies between the results from `topN` and `topN_warm`)_

In [18]:

```
print_reclist(model_with_sideinfo.topN_warm(X_col=ratings.ItemId.loc[ratings.UserId == 948],
                                            X_val=ratings.Rating.loc[ratings.UserId == 948],
                                            exclude=exclude))
```

In [19]:

```
print_reclist(model_with_sideinfo.topN_warm(X_col=ratings.ItemId.loc[ratings.UserId == 948],
                                            X_val=ratings.Rating.loc[ratings.UserId == 948],
                                            U=user_side_info.loc[user_side_info.UserId == 948],
                                            exclude=exclude))
```

In [20]:

```
print_reclist(model_with_sideinfo.topN_cold(U=user_side_info.loc[user_side_info.UserId == 948].drop("UserId", axis=1),
                                            exclude=exclude))
```

This last list is very similar to the non-personalized one - that is, the user side information had very little leverage in the model, at least for this user. In this regard, the content-based model tends to be better at cold-start recommendations:

In [21]:

```
print_reclist(model_content_based.topN_cold(U=user_side_info.loc[user_side_info.UserId == 948].drop("UserId", axis=1),
                                            exclude=exclude_cb))
```

*(For this use-case, it would nevertheless be better to also add item biases to the content-based model)*

This section shows usage of the `predict` family of functions for getting the predicted rating for a given user and item, in order to calculate evaluation metrics such as RMSE and tune model parameters.

**Note that, while widely used in earlier literature, RMSE might not provide a good overview of the ranking of items (which is what matters for recommendations), and it's recommended to also evaluate ranking metrics such as precision@K, NDCG@K, correlations, etc.**

**Also be aware that there is a different class `CMF_implicit` which might perform better at implicit-feedback metrics such as precision@K.**
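As a minimal example of such a ranking metric, precision@K can be computed as the fraction of the top-K scored items that are actually relevant (toy scores and a made-up relevant set below):

```python
import numpy as np

def precision_at_k(scores, relevant, k=5):
    """Fraction of the K highest-scored items that are in the relevant set."""
    top_k = np.argsort(-scores)[:k]
    return np.isin(top_k, list(relevant)).mean()

scores = np.array([0.9, 0.1, 0.8, 0.4, 0.7, 0.2])  # model scores per item
relevant = {0, 2, 3}                               # hypothetical liked items
p_at_5 = precision_at_k(scores, relevant, k=5)
```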

When making recommendations, there's quite a difference between making predictions based on ratings data and making them based on side information alone. In this regard, one can classify predictions into 4 types:

- Predictions for users and items which were both in the training data.
- Predictions for users which were in the training data and items which were not in the training data.
- Predictions for users which were not in the training data and items which were in the training data.
- Predictions for users and items, of which neither were in the training data.

(One could sub-divide further according to users/items which were present in the training data with only ratings or with only side information, but this notebook will not go into that level of detail)

The classic model is only able to make predictions for the first case, while the collective model can leverage the side information in order to make predictions for (2) and (3). In theory, it could also do (4), but this is not recommended and the API does not provide such functionality.

The content-based model, on the other hand, is an ideal approach for case (4). The package also provides a different model (the "offsets" model - see references at the bottom) aimed at improving cases (2) and (3) when there is side information about only users or only items, at the expense of case (1), but such models are not shown in this notebook.

Producing a training and test set split of the ratings and side information:

In [22]:

```
from sklearn.model_selection import train_test_split
users_train, users_test = train_test_split(ratings.UserId.unique(), test_size=0.2, random_state=1)
items_train, items_test = train_test_split(ratings.ItemId.unique(), test_size=0.2, random_state=2)
ratings_train, ratings_test1 = train_test_split(ratings.loc[ratings.UserId.isin(users_train) &
                                                            ratings.ItemId.isin(items_train)],
                                                test_size=0.2, random_state=123)
users_train = ratings_train.UserId.unique()
items_train = ratings_train.ItemId.unique()
ratings_test1 = ratings_test1.loc[ratings_test1.UserId.isin(users_train) &
                                  ratings_test1.ItemId.isin(items_train)]
user_attr_train = user_side_info.loc[user_side_info.UserId.isin(users_train)]
item_attr_train = item_sideinfo_pca.loc[item_sideinfo_pca.ItemId.isin(items_train)]
ratings_test2 = ratings.loc[ratings.UserId.isin(users_train) &
                            ~ratings.ItemId.isin(items_train) &
                            ratings.ItemId.isin(item_sideinfo_pca.ItemId)]
ratings_test3 = ratings.loc[~ratings.UserId.isin(users_train) &
                            ratings.ItemId.isin(items_train) &
                            ratings.UserId.isin(user_side_info.UserId) &
                            ratings.ItemId.isin(item_sideinfo_pca.ItemId)]
ratings_test4 = ratings.loc[~ratings.UserId.isin(users_train) &
                            ~ratings.ItemId.isin(items_train) &
                            ratings.UserId.isin(user_side_info.UserId) &
                            ratings.ItemId.isin(item_sideinfo_pca.ItemId)]
print("Number of ratings in training data: %d" % ratings_train.shape[0])
print("Number of ratings in test data type (1): %d" % ratings_test1.shape[0])
print("Number of ratings in test data type (2): %d" % ratings_test2.shape[0])
print("Number of ratings in test data type (3): %d" % ratings_test3.shape[0])
print("Number of ratings in test data type (4): %d" % ratings_test4.shape[0])
```

In [23]:

```
### Handy usage of Pandas indexing
user_attr_test = user_side_info.set_index("UserId")
item_attr_test = item_sideinfo_pca.set_index("ItemId")
```

Re-fitting earlier models to the training subset of the earlier data:

In [24]:

```
m_classic = CMF(k=40)\
    .fit(ratings_train)
m_collective = CMF(k=40, w_main=0.5, w_user=0.5, w_item=0.5)\
    .fit(X=ratings_train,
         U=user_attr_train,
         I=item_attr_train)
m_contentbased = ContentBased(k=40, user_bias=False, item_bias=False)\
    .fit(X=ratings_train.loc[ratings_train.UserId.isin(user_attr_train.UserId) &
                             ratings_train.ItemId.isin(item_attr_train.ItemId)],
         U=user_attr_train,
         I=item_attr_train)
m_mostpopular = MostPopular(user_bias=True)\
    .fit(X=ratings_train)
```

RMSE for users and items which were both in the training data:

In [25]:

```
from sklearn.metrics import mean_squared_error

pred_nonpersonalized = m_mostpopular.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 non-personalized model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_nonpersonalized,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_nonpersonalized)[0,1]))
pred_ratingsonly = m_classic.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 ratings-only model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_ratingsonly,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_ratingsonly)[0,1]))
pred_hybrid = m_collective.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 hybrid model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_hybrid)[0,1]))
test_cb = ratings_test1.loc[ratings_test1.UserId.isin(user_attr_train.UserId) &
                            ratings_test1.ItemId.isin(item_attr_train.ItemId)]
pred_contentbased = m_contentbased.predict(test_cb.UserId, test_cb.ItemId)
print("RMSE type 1 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(test_cb.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(test_cb.Rating, pred_contentbased)[0,1]))
```

RMSE for users which were in the training data but items which were not:

In [26]:

```
pred_hybrid = m_collective.predict_new(ratings_test2.UserId,
                                       item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 hybrid model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test2.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test2.Rating, pred_hybrid)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test2.UserId],
                                               item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test2.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(ratings_test2.Rating, pred_contentbased)[0,1]))
```

RMSE for items which were in the training data but users which were not:

In [27]:

```
pred_hybrid = m_collective.predict_cold_multiple(item=ratings_test3.ItemId,
                                                 U=user_attr_test.loc[ratings_test3.UserId])
print("RMSE type 3 hybrid model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test3.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test3.Rating, pred_hybrid)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test3.UserId],
                                               item_attr_test.loc[ratings_test3.ItemId])
print("RMSE type 3 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test3.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(ratings_test3.Rating, pred_contentbased)[0,1]))
```

RMSE for users and items which were not in the training data:

In [28]:

```
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test4.UserId],
                                               item_attr_test.loc[ratings_test4.ItemId])
print("RMSE type 4 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test4.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(ratings_test4.Rating, pred_contentbased)[0,1]))
```

In addition to external side information about the users and items, one can also generate features from the same $\mathbf{X}$ data by considering which movies each user did and did not rate - these are taken as binary features, with the zeros also counted towards the loss/objective function.

The package offers an easy option for automatically generating these features on-the-fly, which can then be used in addition to the external features. The full model now becomes: $$ \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B $$ $$ \mathbf{I}_x \approx \mathbf{A} \mathbf{B}_i^T, \:\: \mathbf{I}_x^T \approx \mathbf{B} \mathbf{A}_i^T $$ $$ \mathbf{U} \approx \mathbf{A} \mathbf{C}^T + \mathbf{\mu}_U ,\:\:\:\: \mathbf{I} \approx \mathbf{B} \mathbf{D}^T + \mathbf{\mu}_I $$

Where:

- $\mathbf{I}_x$ is a binary matrix having a 1 at position ${i,j}$ if $x_{ij}$ is not missing, and a zero otherwise.
- $\mathbf{A}_i$ and $\mathbf{B}_i$ are the implicit feature matrices.
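For illustration, the binary matrix $\mathbf{I}_x$ can be built from a ratings frame as below (toy data; cmfrec constructs this internally when the implicit features are enabled):

```python
import numpy as np
import pandas as pd

# Toy ratings frame with the same layout as `ratings`
toy = pd.DataFrame({"UserId": [0, 0, 1, 2],
                    "ItemId": [0, 1, 1, 2],
                    "Rating": [5, 3, 4, 2]})

# I_x has a 1 wherever a rating is observed and a 0 elsewhere;
# unlike in X, the zeros here do enter the loss function
Ix = pd.crosstab(toy.UserId, toy.ItemId).clip(upper=1).to_numpy()
```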

While in the earlier models every user/item had the same regularization applied to its factors, it's also possible to make this regularization adjust itself according to the number of ratings for each user and movie, which tends to produce better models at the expense of more hyperparameter tuning.
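Roughly speaking, this is the idea behind the `scale_lam` option: the effective regularization for each user/item grows with its number of ratings rather than staying fixed, so heavy raters are not effectively under-regularized relative to their amount of data. A toy sketch:

```python
import pandas as pd

toy = pd.DataFrame({"UserId": [0, 0, 0, 1],
                    "Rating": [5, 3, 4, 2]})
base_lam = 0.05

# Per-user effective regularization, scaled by the rating count
n_ratings = toy.groupby("UserId").size()
effective_lam = base_lam * n_ratings
```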

The package also offers an ALS-Cholesky solver, which is slower than the default conjugate gradient method but tends to give better end results. This section will now use the implicit features and the Cholesky solver, and compare the new models to the previous ones.

In [29]:

```
m_implicit = CMF(k=40, add_implicit_features=True,
                 lambda_=0.05, scale_lam=True,
                 w_main=0.7, w_implicit=1., use_cg=False)\
    .fit(X=ratings_train)
m_implicit_plus_collective = \
    CMF(k=40, add_implicit_features=True, use_cg=False,
        lambda_=0.03, scale_lam=True,
        w_main=0.5, w_user=0.3, w_item=0.3, w_implicit=1.)\
    .fit(X=ratings_train,
         U=user_attr_train,
         I=item_attr_train)

pred_ratingsonly = m_classic.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 ratings-only model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_ratingsonly,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_ratingsonly)[0,1]))
pred_implicit = m_implicit.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 ratings + implicit + dyn + Chol: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_implicit,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_implicit)[0,1]))
pred_hybrid = m_collective.predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 hybrid model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_hybrid)[0,1]))
pred_implicit_plus_collective = m_implicit_plus_collective\
    .predict(ratings_test1.UserId, ratings_test1.ItemId)
print("RMSE type 1 hybrid + implicit + dyn + Chol: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1.Rating,
                                  pred_implicit_plus_collective,
                                  squared=True)),
       np.corrcoef(ratings_test1.Rating, pred_implicit_plus_collective)[0,1]))
```

Note, however, that while the dynamic regularization and the Cholesky method usually lead to improvements, the newly-added implicit features oftentimes result in worse cold-start predictions:

In [30]:

```
pred_hybrid = m_collective.predict_new(ratings_test2.UserId,
                                       item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 hybrid model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test2.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test2.Rating, pred_hybrid)[0,1]))
pred_implicit_plus_collective = \
    m_implicit_plus_collective\
    .predict_new(ratings_test2.UserId,
                 item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 hybrid model + implicit + dyn + Chol: %.3f [rho: %.3f] (might get worse)" %
      (np.sqrt(mean_squared_error(ratings_test2.Rating,
                                  pred_implicit_plus_collective,
                                  squared=True)),
       np.corrcoef(ratings_test2.Rating, pred_implicit_plus_collective)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test2.UserId],
                                               item_attr_test.loc[ratings_test2.ItemId])
print("RMSE type 2 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test2.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(ratings_test2.Rating, pred_contentbased)[0,1]))
```

In [31]:

```
pred_hybrid = m_collective.predict_cold_multiple(item=ratings_test3.ItemId,
                                                 U=user_attr_test.loc[ratings_test3.UserId])
print("RMSE type 3 hybrid model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test3.Rating,
                                  pred_hybrid,
                                  squared=True)),
       np.corrcoef(ratings_test3.Rating, pred_hybrid)[0,1]))
pred_implicit_plus_collective = \
    m_implicit_plus_collective\
    .predict_cold_multiple(item=ratings_test3.ItemId,
                           U=user_attr_test.loc[ratings_test3.UserId])
print("RMSE type 3 hybrid model + implicit + dyn + Chol: %.3f [rho: %.3f] (got worse)" %
      (np.sqrt(mean_squared_error(ratings_test3.Rating,
                                  pred_implicit_plus_collective,
                                  squared=True)),
       np.corrcoef(ratings_test3.Rating, pred_implicit_plus_collective)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test3.UserId],
                                               item_attr_test.loc[ratings_test3.ItemId])
print("RMSE type 3 content-based model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test3.Rating,
                                  pred_contentbased,
                                  squared=True)),
       np.corrcoef(ratings_test3.Rating, pred_contentbased)[0,1]))
```

- Cortes, David. "Cold-start recommendations in Collective Matrix Factorization." arXiv preprint arXiv:1809.00366 (2018).
- Singh, Ajit P., and Geoffrey J. Gordon. "Relational learning via collective matrix factorization." Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2008.
- Takacs, Gabor, Istvan Pilaszy, and Domonkos Tikk. "Applications of the conjugate gradient method for implicit feedback collaborative filtering." Proceedings of the fifth ACM conference on Recommender systems. 2011.
- Rendle, Steffen, Li Zhang, and Yehuda Koren. "On the difficulty of evaluating baselines: A study on recommender systems." arXiv preprint arXiv:1905.01395 (2019).
- Zhou, Yunhong, et al. "Large-scale parallel collaborative filtering for the netflix prize." International conference on algorithmic applications in management. Springer, Berlin, Heidelberg, 2008.