@gvtulder
gvtulder / deformation_coordinates.ipynb
Created August 5, 2019 10:37
elasticdeform deformation grid
gvtulder / encrypting-without-reinstalling-ubuntu.txt
Last active February 8, 2023 03:22
Encrypting hard drives without reinstalling Ubuntu
Encrypting hard drives without reinstalling Ubuntu
===================================================
Gijs van Tulder, 26.02.2019, last update 04.03.2019
This is a list of the steps I took to encrypt the partitions on my
Ubuntu 16.04 laptop with the LUKS encryption system. Encrypting the
partitions took some time but was relatively easy. Getting the system
to boot afterwards was a little trickier, but doable.
Obviously, these steps might not work for you. Follow them at your own risk.
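The preview cuts off before the steps themselves, but the general shape of such an in-place LUKS migration looks roughly like the outline below. This is a hedged reconstruction, not the author's actual commands; the device name /dev/sdXN, the 32M header reservation, and the mapping name cryptroot are placeholders.

    # from a live USB, with the target partition unmounted:
    e2fsck -f /dev/sdXN
    # shrink the filesystem slightly to make room for the LUKS header
    resize2fs /dev/sdXN <current size minus some headroom>
    # encrypt the partition in place
    cryptsetup-reencrypt --new --reduce-device-size 32M /dev/sdXN
    # open the new LUKS container and grow the filesystem back
    cryptsetup open /dev/sdXN cryptroot
    resize2fs /dev/mapper/cryptroot
    # then add the mapping to /etc/crypttab and /etc/fstab and
    # rebuild the initramfs so the system can still boot:
    update-initramfs -u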
from __future__ import print_function
import numpy
import theano
import theano.tensor as T
from theano.tensor.nnet.abstract_conv import AbstractConv2d
from theano.tensor.nnet.abstract_conv import AbstractConv2d_gradInputs
from theano.tensor.nnet.corr import CorrMM_gradInputs
from theano.gpuarray.blas import GpuCorrMM_gradInputs
from theano.gpuarray.dnn import dnn_gradinput
gvtulder / multi_inplace_demo.py
Created November 13, 2016 23:36
Multiple in-place optimizations
from __future__ import print_function
import time
from collections import OrderedDict
import numpy as np
import theano
import theano.tensor as T
import theano.gpuarray
gvtulder / explanation.md
Created November 12, 2016 12:19
Theano string-dependent optimization bug

In this commit https://github.com/Theano/Theano/pull/5190/commits/772fbcb00ba03b8ef9c6c198f04bf519a91589b1 I 'fixed' a bug by renaming the property `inplace_running_mean` to `inplace_running_xxx`.

The problem seems to be in the optimization:

  1. There is a `GpuDnnBatchNorm` Op that takes a tensor `x` as its first input, normalizes it and returns the result. The Op has a few inplace parameters, named `inplace_running_mean`, `inplace_running_var` and `inplace_output`. Setting `inplace_output=True` makes it overwrite the original input tensor `x`.

  2. The gradient Op `GpuDnnBatchNormGrad` takes as input the original tensor `x` and some outputs of the `GpuDnnBatchNorm` Op, so the grad Op has to run after the normalization Op.

  3. Obviously, if the normalization Op and the grad Op use the same input `x`, the normalization Op should not run with `inplace_output=True`.
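The aliasing hazard in step 3 can be sketched in plain Python. This is not the actual Theano Ops; `batchnorm_forward` is a made-up stand-in whose `inplace` flag mimics `inplace_output=True`.

```python
def batchnorm_forward(x, inplace=False):
    # Normalize x to zero mean / unit variance. With inplace=True the
    # input list itself is overwritten, mimicking inplace_output=True.
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    out = x if inplace else [0.0] * n
    for i in range(n):
        out[i] = (x[i] - mean) / (var + 1e-5) ** 0.5
    return out

# Out-of-place: a grad Op scheduled afterwards can still read the
# original input x.
x = [1.0, 2.0, 3.0, 4.0]
out = batchnorm_forward(x, inplace=False)
assert x == [1.0, 2.0, 3.0, 4.0]

# Inplace: x is clobbered by the forward pass, so a grad Op scheduled
# after it would read normalized values instead of the original input.
x = [1.0, 2.0, 3.0, 4.0]
out = batchnorm_forward(x, inplace=True)
assert x is out and x != [1.0, 2.0, 3.0, 4.0]
```

A correct optimizer therefore has to check for this aliasing before switching an Op to its inplace variant, which is why the renamed property mattered at all.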