Recommending GitHub repositories with Google Big Query and implicit library: https://medium.com/@jbochi/recommending-github-repositories-with-google-bigquery-and-the-implicit-library-e6cce666c77
@stared

commented Jun 25, 2017

This is wonderful, thank you @jbochi for sharing! Though, I would note that it needs:

pip install implicit
pip install --upgrade google-api-python-client

and the creation of a BigQuery project whose name is set in project_id.

As for the script, at least in Python 3 (after changing iteritems -> items), recalculate_user=True does not work and needs to be removed, and for some reason model.explain does not exist (are you using the latest version of implicit?).

@jbochi

Owner Author

commented Jul 3, 2017

Hey @stared. Thanks for your comments. Please make sure you are using the latest version of implicit. It adds recalculate_user=True and .explain support.

@juanremi

commented Jul 28, 2017

Hi @jbochi, I'm trying to implement this with another dataset (user/items (fruits)/ratings), but only using user/items. I got some results, but I don't know whether they are correct. I used this dataset: https://github.com/juanremi/datasets-to-CF-recsys/blob/master/bigquery-frutas-sin-items-no-vistos.csv
And the code is here: https://github.com/juanremi/datasets-to-CF-recsys/blob/master/bigquery/bigquery2.py

Can you give me some ideas? Thanks

@jbochi


commented Aug 22, 2017

Hi @juanremi. Sorry for the late reply. Your code looks fine, but 50 factors for a dataset so small is probably way too much. You should do some validation to make sure the results make sense. You can, for instance, compute the mean squared error for ratings in the validation set.
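The validation @jbochi suggests can be sketched like this: given the learned user and item factor matrices, score each held-out (user, item, rating) triple with a dot product and average the squared errors. The factor values and triples below are made up for illustration, not taken from the notebook.

```python
import numpy as np

def validation_mse(user_factors, item_factors, val_ratings):
    """Mean squared error between predicted and held-out ratings.

    val_ratings is a list of (user_index, item_index, rating) triples;
    a prediction is the dot product of the user and item factor vectors.
    """
    errors = []
    for u, i, rating in val_ratings:
        predicted = user_factors[u] @ item_factors[i]
        errors.append((rating - predicted) ** 2)
    return float(np.mean(errors))

# Toy factors chosen by hand so the predictions are exact.
user_factors = np.array([[1.0, 0.0], [0.0, 1.0]])
item_factors = np.array([[1.0, 0.0], [0.0, 1.0]])
val = [(0, 0, 1.0), (1, 1, 1.0), (0, 1, 0.0)]
print(validation_mse(user_factors, item_factors, val))  # perfect fit -> 0.0
```

Comparing this number across different factor counts (e.g. 5 vs. 50) is one quick way to see whether the model is overfitting a small dataset.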

@ptiwaree

commented Jan 17, 2018

Hi jbochi, in your recommendations.ipynb, could you please explain this part?

confidence = 40
model.fit(confidence * stars)

I am not sure why we set confidence to 40, or what confidence even means. Based on this, it looks like you are filling the matrix with values of 40 for the non-zero entries. Is that true? Can we instead fill it with the actual number of stars (a number between 10 and 150)?

@jbochi


commented Feb 9, 2018

@ptiwaree stars is a sparse matrix with 0s and 1s. By multiplying by confidence, we are just giving positive examples a fixed high weight. The best value should be determined by cross-validation. You can also try this heuristic: benfred/implicit#74 (comment)
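A minimal sketch of the scaling described above: stars holds 0/1 interactions in a sparse matrix, and multiplying by a scalar scales only the stored (positive) entries, while the missing entries stay zero. The shape and entries here are invented for illustration.

```python
import numpy as np
from scipy.sparse import coo_matrix

# 0/1 interaction matrix: 3 items (rows) x 2 users (columns),
# with three "starred" entries.
stars = coo_matrix((np.ones(3), ([0, 1, 2], [0, 1, 0])), shape=(3, 2))

confidence = 40  # fixed weight given to every positive example
weighted = confidence * stars

print(weighted.toarray())
# [[40.  0.]
#  [ 0. 40.]
#  [40.  0.]]
```

Only the stored entries are scaled; the zeros (unobserved pairs) keep their implicit low confidence, which is the point of the weighting scheme.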

@antonioalegria

commented Jun 14, 2018

Hi @jbochi, I'm trying to use this code to evaluate another dataset, but I'm getting a bunch of index-out-of-bounds errors because the factors in the train data are not the same as in the test data. This is probably because the users/items in train are different from the ones seen in test.

How would you adapt this code to handle this? Or is my theory incorrect? /cc @antonioalegria

@joddm

commented Nov 2, 2018

I also have problems with index out of bounds. Did you figure it out, @antonioalegria?

Do you know which versions of the libraries you ran this with, @jbochi? I'm running now with

pandas: 0.23.4
numpy: 1.15.2
scipy: 1.1.0
implicit: 0.3.8
sklearn: 0.20.0

My original dataset is of shape:

<20210x4324 sparse matrix of type '<class 'numpy.float64'>'
	with 116992 stored elements in COOrdinate format>

and the truth variable in ndcg_scorer transforms the test split to shape (20206, 4324), while the predictions variable is of shape (20210, 4310).

So this is what's causing the index out of bounds error.

Edit: By changing the p variable, I managed to correct the predictions shape, but I don't understand why the truth variable is of shape (20206, 4324). My guess is the same as yours, @antonioalegria: that in LeavePOutByGroup, one of the splits contains users that haven't purchased some products, so the full dimensions are not restored in truth.

Okay, by filtering out purchases of products with fewer than x customers (trying out different values), I managed to get a correct truth shape, but now the predictions shape is off. Aah... :) Do you know of any heuristic, @jbochi?

@seb799

commented Jan 17, 2019

@joddm @antonioalegria
From my understanding, p in LeavePOutByGroup() should be <= (minimum number of items per user)/2.
For example, if your dataset has a user with activity for only 4 items, p should be <= 2.

Either rebuild your dataset to include only users with activity for more products, or filter out users with fewer than p*2 products from the test sets.

Hope that makes sense

It resolved the index out of bound error on my end.
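The filtering described above can be sketched like this: keep only users with at least 2*p interactions, so a leave-p-out split can hold out p items per user and still leave at least p items in train. This is a sketch with pandas; the column names and the toy data are assumptions, not taken from the original notebook.

```python
import pandas as pd

def filter_sparse_users(interactions, p):
    """Drop users with fewer than 2*p interactions, so that a
    leave-p-out-by-group split leaves every remaining user with
    at least p items in both the train and test sides."""
    counts = interactions.groupby("user_id")["item_id"].transform("count")
    return interactions[counts >= 2 * p]

# Toy interaction log: user "a" has 4 items, "b" has 2, "c" has 1.
interactions = pd.DataFrame({
    "user_id": ["a", "a", "a", "a", "b", "b", "c"],
    "item_id": [1, 2, 3, 4, 1, 2, 3],
})
filtered = filter_sparse_users(interactions, p=2)
print(sorted(filtered["user_id"].unique()))  # only "a" has >= 4 interactions
```

Filtering before splitting (rather than after) also keeps the truth and predictions matrices at consistent shapes, which is what the index-out-of-bounds errors above were complaining about.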
