This is wonderful, thank you @jbochi for sharing! Though, I would note that it needs:
pip install implicit
pip install --upgrade google-api-python-client
and creating a project in BigQuery with its name set in project_id.
For the script, at least in Python 3 (aside from changing iteritems -> items), recalculate_user=True does not work and needs to be removed, and for some reason model.explain does not exist (are you using the latest version of implicit?).
Hey @stared. Thanks for your comments. Please make sure you are using the latest version of implicit. It adds recalculate_user=True and .explain support.
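For anyone curious what recalculate_user=True actually does: it re-solves the ALS least-squares step for that user's latent factors from the fixed item factors, so recommendations reflect the user's current interactions. Here is a minimal numpy sketch of that step (the function name and parameters are illustrative, not implicit's API), following the closed form from Hu, Koren & Volinsky (2008):

```python
import numpy as np

def recalculate_user_factors(item_factors, preferences, confidence=40.0, reg=0.01):
    """Solve the ALS least-squares step for a single user:
    x_u = (Y^T C_u Y + reg*I)^{-1} Y^T C_u p_u
    where Y is the item-factor matrix, p_u the 0/1 preference vector,
    and C_u a diagonal of per-item confidence weights."""
    Y = item_factors
    Cu = 1.0 + confidence * preferences           # confidence weights per item
    A = (Y * Cu[:, None]).T @ Y + reg * np.eye(Y.shape[1])
    b = (Y * Cu[:, None]).T @ preferences
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 5))                      # 20 items, 5 factors (toy data)
p_u = np.zeros(20)
p_u[[1, 4, 7]] = 1.0                              # items the user interacted with
x_u = recalculate_user_factors(Y, p_u)
scores = Y @ x_u                                  # liked items should score higher
```

implicit performs the equivalent computation internally when recalculate_user=True is passed, which is why it only exists in recent versions.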
Hi @jbochi, I'm trying to implement this with another dataset (user/items (fruits)/ratings), but only using user/items. I got some results, but I don't know if they are correct. I use this dataset: https://github.com/juanremi/datasets-to-CF-recsys/blob/master/bigquery-frutas-sin-items-no-vistos.csv
And the code is here: https://github.com/juanremi/datasets-to-CF-recsys/blob/master/bigquery/bigquery2.py
Can you give me some ideas? Thanks
Hi @juanremi. Sorry for the late reply. Your code looks fine, but 50 factors for a dataset so small is probably way too much. You should do some validation to make sure the results make sense. You can, for instance, compute the mean squared error for ratings in the validation set.
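One hedged way to run that check (array and function names here are illustrative, not from the gist): predict each held-out rating as the dot product of the corresponding user and item factors, then take the mean squared error.

```python
import numpy as np

def validation_mse(user_factors, item_factors, users, items, ratings):
    """MSE between predicted and held-out ratings for (user, item) pairs.
    Prediction for pair i is the dot product of that user's and item's factors."""
    preds = np.einsum('ij,ij->i', user_factors[users], item_factors[items])
    return np.mean((preds - ratings) ** 2)

# toy factors chosen so the predictions are exact, giving an MSE of 0
uf = np.array([[1.0, 0.0], [0.0, 1.0]])           # 2 users, 2 factors
itf = np.array([[2.0, 0.0], [0.0, 3.0]])          # 2 items, 2 factors
mse = validation_mse(uf, itf, np.array([0, 1]), np.array([0, 1]), np.array([2.0, 3.0]))
```

For implicit-feedback data, a ranking metric such as NDCG or precision@k is usually more informative than MSE, but MSE is a quick sanity check.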
Hi jbochi, in your recommendations.ipynb, could you please explain this part?
confidence = 40
model.fit(confidence * stars)
I am not sure why we are setting confidence to 40, or what confidence even means. Based on this, it looks like you are filling in the matrix with values of 40 for the entries that are nonzero. Is that true? Can we instead fill it with the actual number of stars (a number between 10 and 150)?
@ptiwaree stars is a sparse matrix with 0s and 1s. By multiplying by confidence, we are just giving positive examples a fixed high weight. The best value should be determined by cross-validation. You can also try this heuristic: benfred/implicit#74 (comment)
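To make that concrete, here is a tiny scipy sketch (toy data, not the gist's actual matrix): multiplying a sparse 0/1 matrix by a scalar scales only the stored nonzeros, so positives end up with weight 40 while the implicit zeros stay zero.

```python
import numpy as np
from scipy.sparse import csr_matrix

# toy 0/1 star matrix: 3 users x 4 repos, 1 = user starred the repo
rows = np.array([0, 0, 1, 2])
cols = np.array([1, 3, 0, 2])
stars = csr_matrix((np.ones(4), (rows, cols)), shape=(3, 4))

confidence = 40
weighted = confidence * stars      # scales only the stored (nonzero) entries
dense = weighted.toarray()
```

Passing raw counts (e.g. actual star numbers) instead of a fixed weight is also valid in the Hu et al. confidence model; the fixed scalar is just the simplest choice when the data is binary.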
Hi @jbochi, I'm trying to use this code to evaluate another dataset, but I'm getting a bunch of index out of bounds errors because the factors in the train data are not the same as in the test data. This is probably because the users/items in train are different from the ones seen in test.
How would you adapt this code to face this challenge? Or is my theory incorrect? /cc @antonioalegria
I also have problems with index out of bounds, did you figure it out @antonioalegria?
Do you know what version of the libraries you ran this with @jbochi? Running now with
My original dataset is of shape:
<20210x4324 sparse matrix of type '<class 'numpy.float64'>'
with 116992 stored elements in COOrdinate format>
and the truth variable in ndcg_scorer transforms the test split to shape (20206, 4324), while the predictions variable is of shape (20210, 4310).
So this is what's causing the index out of bounds error.
Edit: By changing the p variable, I managed to correct the predictions shape, but I don't understand why the truth variable is of shape (20206, 4324). My guess is the same as yours @antonioalegria: in LeavePOutByGroup, one of the splits contains users that haven't purchased some products, hence the full dimensions are not restored in truth.
Okay, by filtering out purchases with fewer than x customers (trying out different values), I managed to get a correct truth shape, but now the predictions shape is off. Aah... :) Do you know of any heuristic @jbochi?
From my understanding, p in LeavePOutByGroup() should be <= (minimum number of items per user) / 2.
For example, if your dataset has a user with activity for only 4 items, p should be <= 2.
Either rebuild your dataset to include only users with activity for more products, or filter out users with fewer than p*2 products from the test sets.
Hope that makes sense
It resolved the index out of bounds error on my end.
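One way to apply that rule before splitting (the function name is illustrative, not part of the gist): drop every interaction belonging to a user with fewer than 2*p items, so LeavePOutByGroup(p) can hold out p items and still leave at least p for training.

```python
import numpy as np

def drop_sparse_users(user_ids, item_ids, p=2):
    """Keep interactions only for users with >= 2*p items, so that
    holding out p items per user still leaves >= p items for training."""
    users, counts = np.unique(user_ids, return_counts=True)
    keep = users[counts >= 2 * p]                 # users with enough activity
    mask = np.isin(user_ids, keep)
    return user_ids[mask], item_ids[mask]

# toy data: user 0 has 4 items (kept for p=2), user 1 has only 2 (dropped)
u = np.array([0, 0, 0, 0, 1, 1])
i = np.array([10, 11, 12, 13, 10, 12])
u2, i2 = drop_sparse_users(u, i, p=2)
```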
If my dataset has mostly just 2 items per user, I assume LeavePOutByGroup is not the way to go? Because if I understand correctly, each split would then have mostly 1 item per user, and therefore the model has nothing to learn.
@jbochi what is the license on this gist?