Working on TensorFlow Ranking

Alex Egg eggie5


The saved_model_cli can't handle string input with the --input_examples flag:

saved_model_cli run \
--dir . \
--tag_set serve \
--signature_def predict \
--input_examples 'examples=[{"menu_item":["this is a sentence"]}]'
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.Tensors;
import org.tensorflow.TensorFlow;
import org.tensorflow.SavedModelBundle;
import org.tensorflow.SavedModelBundle.Loader;
import org.tensorflow.framework.SignatureDef;
import org.tensorflow.framework.MetaGraphDef;
import org.tensorflow.framework.TensorInfo;

Learning to Rank

A common method to rank a set of items is to pass each item through a scoring function and then sort the scores to get an overall ranking. Traditionally this space has been dominated by ordinal regression techniques on point-wise data. However, there are serious advantages to be gained by learning a scoring function on pair-wise data instead. This technique, commonly called RankNet, was originally explored in Microsoft's seminal paper Learning to Rank using Gradient Descent[^1].
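The pair-wise idea can be made concrete: RankNet models the probability that item i should rank above item j as a logistic function of the score difference and trains with cross-entropy. A minimal numpy sketch of that loss (the `sigma` scale follows the paper; the function name is mine):

```python
import numpy as np

def ranknet_loss(s_i, s_j, p_target=1.0, sigma=1.0):
    """Pairwise cross-entropy loss from the RankNet paper.

    s_i, s_j: model scores for the two items; p_target is the target
    probability that item i should rank above item j (1.0 = always).
    """
    diff = sigma * (s_i - s_j)
    p_ij = 1.0 / (1.0 + np.exp(-diff))  # modeled P(i ranked above j)
    return -p_target * np.log(p_ij) - (1 - p_target) * np.log(1 - p_ij)

# Loss is small when the scores agree with the preference,
# large when they contradict it:
print(ranknet_loss(2.0, 0.5))
print(ranknet_loss(0.5, 2.0))
```

When the two scores are equal the loss is ln 2, the same starting point as any untrained binary cross-entropy model.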

In this talk we will discuss:

  • Theory behind point-wise and pair-wise data

  • Ordinal Regression: ranking point-wise data

  • How to crowd-source pair-wise data
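Once pair-wise judgments are collected, they have to be aggregated back into a ranking. A crude baseline, sketched below on a hypothetical toy dataset, orders items by raw win rate; Bradley-Terry-style models are the principled alternative:

```python
# Hypothetical crowd-sourced judgments: (winner, loser) item ids.
pairs = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b"), ("c", "b")]

def rank_by_wins(pairs):
    """Rank items by raw win rate across all pairwise comparisons.

    A crude stand-in for a Bradley-Terry fit: each item's score is
    wins / comparisons, and items are sorted by that score.
    """
    wins, games = {}, {}
    for winner, loser in pairs:
        wins[winner] = wins.get(winner, 0) + 1
        wins.setdefault(loser, 0)
        games[winner] = games.get(winner, 0) + 1
        games[loser] = games.get(loser, 0) + 1
    rate = {item: wins[item] / games[item] for item in games}
    return sorted(rate, key=rate.get, reverse=True)

print(rank_by_wins(pairs))
```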

import numpy as np
from sklearn.base import BaseEstimator
from keras.layers import Input, Embedding, Dense, Flatten, Activation, Add, Dot
from keras.models import Model
from keras.regularizers import l2 as l2_reg
from keras import initializers
import itertools
def build_model(max_features,K=8,solver='adam',l2=0.0,l2_fm = 0.0):
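The truncated `build_model` signature above (a latent dimension `K`, a separate `l2_fm` regularizer) suggests a factorization machine. As a reference for what the Keras model would compute, here is the FM scoring equation in plain numpy (names and shapes are my assumptions, not the gist's):

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Factorization-machine prediction for one feature vector x.

    x:  feature vector, shape (n,)
    w0: global bias; w: linear weights, shape (n,)
    V:  latent factors, shape (n, K) -- pairwise interaction weights
        are factorized as V @ V.T
    """
    linear = w0 + x @ w
    # O(nK) rewrite of the pairwise term:
    # sum_{i<j} <V_i, V_j> x_i x_j = 0.5 * sum((Vx)^2 - (V^2)(x^2))
    interactions = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return linear + interactions
```

The O(nK) identity in the interaction term is what makes FMs practical for the sparse, high-dimensional inputs that `max_features` hints at.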
IRI-case-study.ipynb
findphone.ipynb
eggie5 /
Created Dec 11, 2017 — forked from omoindrot/
Example TensorFlow script for finetuning a VGG model on your own data.
Uses module which is in release v1.2
Based on PyTorch example from Justin Johnson
Required packages: tensorflow (v1.2)
Download the weights trained on ImageNet for VGG:
### We will try to serialize and deserialize a graph that uses the new `get_single_element` function of the Dataset API
### You will see that it does not deserialize gracefully.
#### Part 1: Build arbitrary graph using Dataset API and new get_single_element function
import numpy as np
import tensorflow as tf
from tensorflow.contrib.data import Dataset, Iterator  # assumed module: the original import path was truncated; tf.contrib.data matches the TF 1.x era of this gist
View tf-ml.log
INFO 2017-02-27 09:29:43 -0800 master-replica-0 Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0)
INFO 2017-02-27 10:51:58 -0800 master-replica-0 max_u_id: 6040
INFO 2017-02-27 10:51:58 -0800 master-replica-0 max_i_id: 3952
INFO 2017-02-27 10:51:58 -0800 master-replica-0 epoch: 1
INFO 2017-02-27 10:51:58 -0800 master-replica-0 bpr_loss: 0.718087361238
INFO 2017-02-27 10:51:58 -0800 master-replica-0 test_loss: 0.941205 test_auc: 0.633492314296
INFO 2017-02-27 10:51:58 -0800 master-replica-0 epoch: 2
INFO 2017-02-27 10:51:58 -0800 master-replica-0 bpr_loss: 0.706146094591
INFO 2017-02-27 10:51:58 -0800 master-replica-0 test_loss: 0.933789 test_auc: 0.701774210244
INFO 2017-02-27 10:51:58 -0800 master-replica-0 epoch: 3
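The `bpr_loss` column in this log comes from Bayesian Personalized Ranking, which minimizes the negative log-sigmoid of the score margin between an observed item and a sampled negative. A numpy sketch (function name mine):

```python
import numpy as np

def bpr_loss(x_ui, x_uj):
    """BPR loss over a batch of (user, positive, negative) triples.

    x_ui, x_uj: model scores for the observed item i and the sampled
    negative item j. Minimizing -mean log sigmoid(x_ui - x_uj) pushes
    positives to score above negatives.
    """
    x_uij = np.asarray(x_ui) - np.asarray(x_uj)
    return float(-np.mean(np.log(1.0 / (1.0 + np.exp(-x_uij)))))
```

With untrained scores near zero the loss starts at ln 2 ≈ 0.693, consistent with the 0.718 logged at epoch 1.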


Let's start by pointing out that the method usually referred to as "SVD" that is used in the context of recommendations is not strictly speaking the mathematical Singular Value Decomposition of a matrix but rather an approximate way to compute the low-rank approximation of the matrix by minimizing the squared error loss. A more accurate, albeit more generic, way to call this would be Matrix Factorization. 


The basic idea is that we want to decompose our original and very sparse matrix into two low-rank matrices that represent user factors and item factors. This is done by using an iterative approach to minimize the loss function. The most common way to do this is Stochastic Gradient Descent, but others such as ALS are also possible. The actual loss function to minimize includes a global bias term and two separate bias terms, one for the user and one for the item.
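The loss described above (squared error plus a global bias and per-user/per-item biases) can be minimized with a few lines of SGD. A self-contained numpy sketch, assuming ratings arrive as `(user, item, rating)` triples (hyperparameter values are illustrative):

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, k=8, lr=0.01, reg=0.1, epochs=20):
    """Biased matrix factorization trained with plain SGD.

    ratings: list of (user, item, rating) triples. Prediction:
        r_hat = mu + b_u[u] + b_i[i] + P[u] @ Q[i]
    """
    rng = np.random.default_rng(0)
    mu = np.mean([r for _, _, r in ratings])          # global bias
    b_u, b_i = np.zeros(n_users), np.zeros(n_items)   # user/item biases
    P = 0.1 * rng.normal(size=(n_users, k))           # user factors
    Q = 0.1 * rng.normal(size=(n_items, k))           # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - (mu + b_u[u] + b_i[i] + P[u] @ Q[i])
            b_u[u] += lr * (err - reg * b_u[u])
            b_i[i] += lr * (err - reg * b_i[i])
            # simultaneous update: RHS uses the pre-update factors
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return mu, b_u, b_i, P, Q
```

ALS would instead alternate closed-form least-squares solves for P and Q, which parallelizes better but is more code.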

