@czotti
czotti / install.sh
Last active November 20, 2015 15:30
Arch Linux Kimsufi install
# Put Docker's data directory on the big /home drive: /var/lib/docker -> /home/docker
mkdir /home/docker
cd /var/lib/
ln -s /home/docker .
# Update the system
pacman -Syu --noconfirm
# Install tools
pacman -S --noconfirm vim emacs docker nginx iptables
@czotti
czotti / comp.cpp
Last active December 4, 2015 16:45
#include <vector>
#include <random>
#include <set>
#include <map>
#include <algorithm>
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <iomanip>
@czotti
czotti / rmsprop.py
Created March 16, 2016 15:32
Pylearn2 RMSProp
def get_updates(self, learning_rate, grads, lr_scalers=None):
    """
    Provides the symbolic (theano) description of the updates needed to
    perform this learning rule. See Notes for side-effects.

    Parameters
    ----------
    learning_rate : float
        Learning rate coefficient.
    grads : dict
        A dictionary mapping from the model's parameters to their
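The preview stops mid-docstring. For context, RMSProp keeps a running average of each parameter's squared gradient and divides the step by its square root; a minimal NumPy sketch of that update rule (illustrative only, not the Pylearn2/Theano code, and all names here are made up):

import numpy as np

def rmsprop_step(param, grad, mean_square_grad,
                 learning_rate=1e-3, decay=0.9, epsilon=1e-6):
    # Exponential moving average of the squared gradient.
    mean_square_grad = decay * mean_square_grad + (1.0 - decay) * grad ** 2
    # Scale the gradient step by the root of that running average.
    param = param - learning_rate * grad / (np.sqrt(mean_square_grad) + epsilon)
    return param, mean_square_grad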
@czotti
czotti / reading_nifti.py
Created April 4, 2016 14:14
Reading a NIfTI file (or any other supported format) with nibabel
import nibabel as nib

def read_file(filename):
    """
    Args:
    -----
    filename(str): path of your file

    Output:
    -------
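The preview also stops mid-docstring. A minimal, self-contained version of such a loader, assuming the usual nibabel calls (an illustrative completion, not the original gist):

import nibabel as nib

def read_file(filename):
    """Load an image with nibabel and return its data array and affine."""
    img = nib.load(filename)    # NIfTI or any other format nibabel supports
    data = img.get_fdata()      # voxel data as a floating-point numpy array
    return data, img.affine     # affine maps voxel indices to scanner space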
@czotti
czotti / tiramisu.ipynb
Last active November 28, 2018 06:03
Tiramisu keras implementation
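The notebook itself cannot be previewed here. For orientation only: the Tiramisu (FC-DenseNet) architecture is built from dense blocks, and a minimal sketch of such a block with the tf.keras API looks like the following (an illustrative reconstruction, not the gist's notebook):

from tensorflow.keras import layers

def dense_block(x, n_layers=4, growth_rate=16):
    # Each layer sees the concatenation of the block input and all
    # previously produced feature maps (DenseNet-style connectivity).
    features = [x]
    for _ in range(n_layers):
        y = layers.BatchNormalization()(x)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        features.append(y)
        x = layers.Concatenate()(features)
    return x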
@czotti
czotti / Cargo.toml
Last active June 26, 2018 20:07
Cargo run with approx for a slice of Complex
[package]
name = "approx"
version = "0.1.0"
authors = ["Clément Zotti <clement.zotti@imeka.ca>"]
[dependencies]
ndarray = ">=0.10.12,<0.12.0"
num = "0.1"
[dependencies.approx]
@czotti
czotti / main.nf
Last active June 29, 2018 17:54
Nextflow pipeline to test whether CUDA_VISIBLE_DEVICES is set.
#!/usr/bin/env nextflow

modes = [0, 1, 2, 3, 4]

process gpu_output {
    echo true

    script:
    """
    echo \$CUDA_VISIBLE_DEVICES
    """
}
@czotti
czotti / Cargo.toml
Created July 17, 2018 13:05
Change num to num-complex with approx
[package]
name = "approx"
version = "0.1.0"
authors = ["Clément Zotti <clement.zotti@imeka.ca>"]
[dependencies]
ndarray = ">=0.10.12,<0.12.0"
num-complex = "0.2.0"
[dependencies.approx]
@czotti
czotti / output.txt
Last active December 8, 2018 14:14
Speed issues with PyTorch 0.4.1 and 1.0
************************************** TENSOR SIZE (1, 2, 256, 256, 256) **************************************
Iter 1/5, loss: 1.1791579723358154
-----------------------------------  ---------------  ---------------  ---------------  ---------------  ---------------
Name                                         CPU time        CUDA time            Calls        CPU total       CUDA total
-----------------------------------  ---------------  ---------------  ---------------  ---------------  ---------------
convolution                             1993651.502us    1992817.026us                4    7974606.009us    7971268.105us
add                                        1618.376us       6155.640us                4       6473.504us      24622.559us
torch::autograd::AccumulateGrad            6500.853us          7.487us                6      39005.119us         44.922us
is_floating_point                             9.718us          7.812us                1          9.718us          7.812us
cudnn_convolution_backward
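The table above is the per-operator summary printed by PyTorch's autograd profiler. A minimal sketch of how that kind of output is produced (illustrative, with a placeholder model and input, not the gist's actual benchmark):

import torch

# Placeholder model/input; the gist profiles a (1, 2, 256, 256, 256) tensor.
model = torch.nn.Conv3d(2, 4, kernel_size=3, padding=1).cuda()
x = torch.randn(1, 2, 64, 64, 64, device="cuda")

with torch.autograd.profiler.profile(use_cuda=True) as prof:
    out = model(x)
    out.sum().backward()

# Per-operator CPU/CUDA times, aggregated like the table above.
print(prof.key_averages().table(sort_by="cuda_time_total"))

The snippet below is a separate, untitled gist: it swaps every BatchNorm2d layer of a torchvision model for an InstanceNorm2d of the same width.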
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True)

def bn_to_in_inplace(model):
    # Replace every BatchNorm2d with an InstanceNorm2d of the same width,
    # recursing into child containers (Sequential blocks, BasicBlocks, ...).
    for name, module in model.named_children():
        if isinstance(module, torch.nn.BatchNorm2d):
            setattr(model, name, torch.nn.InstanceNorm2d(module.num_features))
        else:
            bn_to_in_inplace(module)
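A quick usage check (illustrative, not part of the gist):

bn_to_in_inplace(model)
# No BatchNorm2d should remain anywhere in the converted model.
assert not any(isinstance(m, torch.nn.BatchNorm2d) for m in model.modules())
print(model.bn1)  # now an InstanceNorm2d(64, ...)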