Christophe Cerisara (cerisara)

  • LORIA - CNRS
  • Nancy, France
cerisara / Arxiv2Kindle.sh
#!/bin/bash
# Usage: Arxiv2Kindle.sh <arxiv paper ID>
python2 getarxivid.py "$1" > arxivids
mkdir -p papers
for i in $(cat arxivids)
do
    # (loop body truncated in the gist preview)
    :
done
@cerisara
cerisara / tex2md.py
Created Oct 23, 2016
Quickly convert LaTeX to Markdown (part of Arxiv2Kindle)
import sys

def getText(l):
    # strip LaTeX citation commands from the line
    l = l.replace('\\citep', '')
    l = l.replace('\\cite', '')
    return l

with open(sys.argv[1], 'rb') as f:
    ls = f.readlines()
empty = False
cerisara / gist:d43e9374a3d2eb9d44487606e0e29966
Data Parallelization with multi-GPU over TensorFlow
Jonathan Laserson <jonilaserson@gmail.com>
Oct 9 (2 days ago)
To Keras-users
Here is how to take an existing model and do data parallelization across multiple GPUs.
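The gist preview cuts off before the code itself. Below is a minimal sketch of the approach described, written against a Keras 2-style functional API with the TensorFlow backend; the helper names slice_batch and to_multi_gpu, and the single-input, single-output assumption, are illustrative rather than taken from the email:

import tensorflow as tf
from keras.layers import Input, Lambda, concatenate
from keras.models import Model

def slice_batch(x, n_gpus, part):
    # Take the part-th 1/n_gpus slice along the batch dimension.
    size = tf.shape(x)[0] // n_gpus
    return x[part * size:(part + 1) * size]

def to_multi_gpu(model, n_gpus=2):
    # Feed each GPU a slice of the batch through the shared model,
    # then concatenate the per-GPU outputs back on the CPU.
    with tf.device('/cpu:0'):
        x = Input(shape=model.input_shape[1:])
    towers = []
    for g in range(n_gpus):
        with tf.device('/gpu:%d' % g):
            sliced = Lambda(slice_batch,
                            arguments={'n_gpus': n_gpus, 'part': g})(x)
            towers.append(model(sliced))
    with tf.device('/cpu:0'):
        merged = concatenate(towers, axis=0)
    return Model(inputs=[x], outputs=[merged])

Because every tower shares the same weights, a single fit() call on the wrapped model trains the original one, with each GPU processing batch_size / n_gpus samples per step.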
@cerisara
cerisara / gitolab.php
Created Aug 31, 2016 — forked from benoitzohar/gitolab.php
Migrate repositories from Gitolite to GitLab.
#!/usr/bin/php -qC
<?php
/******************************************************************************
*
* @file gitolab.php
* @author Benoit Zohar
* @link http://benoitzohar.fr/
* @last-edited 2015-01-09
* @description Migrate projects from Gitolite to GitLab
*
******************************************************************************/
@cerisara
cerisara / DA reco
Created Mar 8, 2016
Dialogue act recognition Keras model
import numpy as np
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten, TimeDistributedDense
from keras.layers.recurrent import LSTM
from keras.layers.embeddings import Embedding
from keras.utils import np_utils
from keras.preprocessing.text import Tokenizer
from keras.models import Graph
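The preview shows only the imports. A minimal sketch of what such a dialogue act classifier could look like with these layers, assuming a single LSTM over word embeddings followed by a softmax over act classes (the nb_words, maxlen, and nb_classes values are placeholders, not taken from the gist):

# Placeholder hyper-parameters, for illustration only.
nb_words, maxlen, nb_classes = 10000, 50, 10

model = Sequential()
model.add(Embedding(nb_words, 128, input_length=maxlen))  # word embeddings
model.add(LSTM(64))                                       # encode the utterance
model.add(Dropout(0.5))
model.add(Dense(nb_classes))                              # one score per dialogue act
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')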
cerisara / genseq.py
complete_sentences = [["*-START-*"] for a in range(1000)]
sents = np.zeros((nb_samples, timesteps + 1, len(vocab)))
for x in range(nb_samples):
    sents[x, 0, word2index["*-START-*"]] = 1.  # init each sequence with the start token
for t in range(timesteps):
    preds = self.model.predict(sents[:, 0:t+1], verbose=0)
    # get the maximum predictions for this timestep for each sample
    next_word_indices = np.argmax(preds[:, t], axis=1)
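The preview ends inside the decoding loop. A plausible continuation (an assumption, not the gist's code, and it presumes an index2word inverse vocabulary) would write each predicted word back into the one-hot input so the next timestep conditions on it:

    # greedy decoding: feed the argmax word back in as the next input
    for s in range(nb_samples):
        sents[s, t + 1, next_word_indices[s]] = 1.
        complete_sentences[s].append(index2word[next_word_indices[s]])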
cerisara / Easy regression ? not so sure...
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.optimizers import SGD
from sklearn.metrics import mean_squared_error
tau=2*np.pi
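The preview stops after the imports. Going by the title and the tau constant, the gist presumably shows that regressing even a single sine period with a small dense network is harder than it looks; a sketch of that experiment (the architecture, learning rate, and Keras 2-style fit() arguments are guesses):

# Hypothetical setup: fit y = sin(x) on [0, tau) with a tiny MLP.
X = np.random.uniform(0, tau, (1000, 1))
y = np.sin(X)

model = Sequential()
model.add(Dense(20, input_dim=1))
model.add(Activation('tanh'))
model.add(Dense(1))
model.compile(loss='mse', optimizer=SGD(lr=0.05))
model.fit(X, y, epochs=200, batch_size=32, verbose=0)

print(mean_squared_error(y, model.predict(X)))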
@cerisara
cerisara / lstm_keras.py
Last active Jun 15, 2018
LSTM training multiclass with Keras
# X_train contains word indices (each an int between 0 and max_words)
# Y_train0 contains class indices (each an int between 0 and nb_classes)
X_train = sequence.pad_sequences(X_train, maxlen=maxlen, padding='post')
X_test = sequence.pad_sequences(X_test, maxlen=maxlen, padding='post')
# one-hot encode the class indices
Y_train = np.zeros((batchSize, globvars.nb_classes))
for t in range(batchSize):
    Y_train[t][Y_train0[t]] = 1
Y_test = np.zeros((len(Y_test0), globvars.nb_classes))
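The manual one-hot loop above can also be written with Keras's bundled helper; an equivalent two-liner (assuming the same globvars and label arrays):

from keras.utils import np_utils

Y_train = np_utils.to_categorical(Y_train0[:batchSize], globvars.nb_classes)
Y_test = np_utils.to_categorical(Y_test0, globvars.nb_classes)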
@cerisara
cerisara / IterativeReduceFlatMap.java
Last active Sep 4, 2015
Alternative flattening/deflattening of parameters in DL4J / Spark
/**
 * Iterative reduce with flat map using map partitions.
 *
 * @author Adam Gibson, modified by Christophe Cerisara
 */
public class IterativeReduceFlatMap implements FlatMapFunction<Iterator<DataSet>, INDArray> {