Christophe Cerisara cerisara

  • Nancy, France
# arXiv paper IDs, one per line
python2 $1 > arxivids
mkdir -p papers
for i in $(cat arxivids)
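The shell fragment above collects arXiv IDs into a file and then loops over them. A minimal Python sketch of the same idea, mapping each ID to its public PDF URL (the helper name is hypothetical; the actual download step is omitted):

```python
def arxiv_pdf_urls(ids):
    # Hypothetical helper: map an arXiv ID to its public PDF URL.
    return ["https://arxiv.org/pdf/%s.pdf" % i for i in ids]

# Illustrative IDs only.
urls = arxiv_pdf_urls(["1234.5678", "1601.00001"])
```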
cerisara /
Created Oct 23, 2016
Fast convert latex to markdown (part of Arxiv2Kindle)
import sys

def getText(l):
    return l

with open(sys.argv[1], 'rb') as f:
    ls = f.readlines()
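The preview is truncated: getText just returns its input. A minimal regex sketch of the kind of line-level cleanup such a LaTeX-to-Markdown converter performs; the patterns here are illustrative, not the gist's actual rules:

```python
import re

def getText(l):
    # Illustrative cleanup only: drop comments, unwrap one-argument
    # commands like \emph{...}, strip remaining control sequences.
    l = re.sub(r'(?<!\\)%.*', '', l)
    l = re.sub(r'\\[a-zA-Z]+\{([^}]*)\}', r'\1', l)
    l = re.sub(r'\\[a-zA-Z]+', '', l)
    return l
```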
gist:d43e9374a3d2eb9d44487606e0e29966
Data Parallelization with multi-GPU over TensorFlow
Jonathan Laserson <>
Oct 9 (2 days ago)
To Keras-users
Here is how to take an existing model and do data parallelization across multiple GPUs.
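The recipe splits each batch into one shard per GPU, runs a model replica on each shard, and concatenates the outputs. A framework-free numpy sketch of that data flow (the replica functions here are stand-ins, not Keras code):

```python
import numpy as np

def parallel_predict(replicas, batch):
    # Split the batch into one shard per replica ("GPU"), run each
    # shard through its replica, and concatenate the per-shard
    # outputs back into one batch of predictions.
    shards = np.array_split(batch, len(replicas))
    outputs = [rep(shard) for rep, shard in zip(replicas, shards)]
    return np.concatenate(outputs, axis=0)

# Two identical "replicas" of a toy model that doubles its input.
model = lambda x: 2 * x
preds = parallel_predict([model, model], np.arange(8).reshape(4, 2))
```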
cerisara / gitolab.php
Created Aug 31, 2016 — forked from benoitzohar/gitolab.php
Migrate repositories from Gitolite to GitLab.
#!/usr/bin/php -qC
<?php
/**
 * @file gitolab.php
 * @author Benoit Zohar
 * @link
 * @last-edited 2015-01-09
 * @description Migrate projects from Gitolite to GitLab
 */
cerisara / DA reco
Created Mar 8, 2016
Dialogue act recognition Keras model
import numpy as np
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten, TimeDistributedDense
from keras.layers.recurrent import LSTM
from keras.layers.embeddings import Embedding
from keras.utils import np_utils
from keras.preprocessing.text import Tokenizer
from keras.models import Graph
complete_sentences = [["*-START-*"] for a in range(1000)]
sents = np.zeros((nb_samples, timesteps+1, len(vocab)))
for i in range(nb_samples):
    sents[i, 0, word2index["*-START-*"]] = 1.  # init the sequences
for t in range(timesteps):
    preds = self.model.predict(sents[:, 0:t+1], verbose=0)
    # get the maximum prediction for this timestep for each sample
    next_word_indices = np.argmax(preds[:, t], axis=1)
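The loop implements greedy decoding: at each timestep, take the argmax of the model's output as the next word and feed it back in. A self-contained numpy illustration, with a random-score function standing in for self.model.predict:

```python
import numpy as np

vocab_size, timesteps, nb_samples = 5, 3, 2
rng = np.random.RandomState(0)

def predict(sents):
    # Stand-in for self.model.predict: random scores per timestep.
    return rng.rand(sents.shape[0], sents.shape[1], vocab_size)

sents = np.zeros((nb_samples, timesteps + 1, vocab_size))
sents[:, 0, 0] = 1.  # index 0 plays the role of *-START-*
for t in range(timesteps):
    preds = predict(sents[:, 0:t + 1])
    next_word_indices = np.argmax(preds[:, t], axis=1)
    for i, w in enumerate(next_word_indices):
        sents[i, t + 1, w] = 1.  # feed the greedy choice back in
```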
Easy regression? Not so sure...
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import SGD
from sklearn.metrics import mean_squared_error
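The imports set up a small regression experiment with SGD. What gradient descent on a linear model actually does can be sketched in numpy alone; the data, learning rate, and step count below are made up for illustration:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(100, 1)
y = 3.0 * X[:, 0] + 1.0  # true line: slope 3, intercept 1

w, b, lr = 0.0, 0.0, 0.5
for _ in range(500):  # full-batch gradient descent steps
    err = (w * X[:, 0] + b) - y
    w -= lr * (err * X[:, 0]).mean()
    b -= lr * err.mean()
```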
cerisara /
Last active Jun 15, 2018
LSTM training multiclass with Keras
# X_train contains word indices (a single int between 0 and max_words)
# Y_train0 contains class indices (a single int between 0 and nb_classes)
X_train = sequence.pad_sequences(X_train, maxlen=maxlen, padding='post')
X_test = sequence.pad_sequences(X_test, maxlen=maxlen, padding='post')
Y_train = np.zeros((batchSize, globvars.nb_classes))  # ,dtype=np.float32)
for t in range(batchSize):
    Y_train[t, Y_train0[t]] = 1.  # one-hot encode the class index
Y_test = np.zeros((len(Y_test0), globvars.nb_classes))  # ,dtype=np.float32)
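The loops turn integer class labels into one-hot target rows. The same conversion in one vectorized numpy step (labels and class count here are illustrative):

```python
import numpy as np

def to_one_hot(labels, nb_classes):
    # One row per sample; a single 1 at each sample's class index.
    out = np.zeros((len(labels), nb_classes))
    out[np.arange(len(labels)), labels] = 1.
    return out

Y = to_one_hot([2, 0, 1], nb_classes=3)
```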
cerisara /
Last active Sep 4, 2015
Alternative flattening/deflattening of parameters in DL4J / Spark
/**
 * Iterative reduce with flat map using map partitions
 * @author Adam Gibson
 *   modified by Christophe Cerisara
 */
public class IterativeReduceFlatMap implements FlatMapFunction<Iterator<DataSet>, INDArray> {
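The DL4J code flattens each partition's parameters into a single INDArray so Spark can reduce them, then restores the original shapes. The same flatten/deflatten round trip sketched in numpy (shapes are illustrative):

```python
import numpy as np

def flatten(params):
    # Concatenate all parameter arrays into one flat vector.
    return np.concatenate([p.ravel() for p in params])

def deflatten(flat, shapes):
    # Cut the flat vector back into arrays of the original shapes.
    out, i = [], 0
    for s in shapes:
        n = int(np.prod(s))
        out.append(flat[i:i + n].reshape(s))
        i += n
    return out

W, b = np.arange(6.).reshape(2, 3), np.array([7., 8.])
W2, b2 = deflatten(flatten([W, b]), [W.shape, b.shape])
```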