
@mjdietzx
mjdietzx / reduce_lr_keras_issue.py
Last active April 8, 2018 02:27
Standalone script, based on the Keras MNIST example, for reproducing an issue
'''Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''
from __future__ import print_function
import numpy as np
np.random.seed(1337) # for reproducibility
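
The preview stops at the seeding; given the filename, the gist presumably exercises the ReduceLROnPlateau callback. A minimal sketch of wiring that callback into fit, assuming `model`, `x_train`, `y_train`, `x_test`, `y_test` are defined as in the standard Keras MNIST CNN example this script is based on:

from keras.callbacks import ReduceLROnPlateau

# halve the learning rate once val_loss stops improving for 2 epochs
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, min_lr=1e-6)

model.fit(x_train, y_train, batch_size=128, epochs=12,
          validation_data=(x_test, y_test), callbacks=[reduce_lr])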
@mjdietzx
mjdietzx / waya-dl-setup.sh
Last active March 13, 2024 15:08
Install CUDA Toolkit v8.0 and cuDNN v6.0 on Ubuntu 16.04
#!/bin/bash
# install CUDA Toolkit v8.0
# instructions from https://developer.nvidia.com/cuda-downloads (linux -> x86_64 -> Ubuntu -> 16.04 -> deb (network))
CUDA_REPO_PKG="cuda-repo-ubuntu1604_8.0.61-1_amd64.deb"
wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
sudo dpkg -i ${CUDA_REPO_PKG}
sudo apt-get update
sudo apt-get -y install cuda
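
After installing (and rebooting), it is worth verifying that the framework actually sees the GPU; a minimal check, assuming an era-appropriate TensorFlow 1.x build is installed:

# prints the devices TensorFlow can see; a working CUDA/cuDNN install
# should surface the GPU alongside the CPU
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())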
@mjdietzx
mjdietzx / gh_release_bamboo.sh
Last active March 29, 2021 15:19
Create and add pre-built artifacts to a GitHub release from CI server using the GitHub releases API.
#!/bin/bash
# creates a GitHub release (draft) and adds pre-built artifacts to the release
# after running this script, the user should manually check the release in GitHub, optionally edit it, and publish it
# args: :version_number (the version number of this release), :body (text describing the contents of the tag)
# example usage: ./gh_release_bamboo.sh "1.0.0" "Release notes: ..."
# resulting artifact name: nRF5-ble-driver_<platform_name>_<version_number>_compiled-binaries.zip (e.g. nRF5-ble-driver_win-64_2.0.1_compiled-binaries.zip)
# to ensure that bash is used: https://answers.atlassian.com/questions/28625/making-a-bamboo-script-execute-using-binbash
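
The preview shows only the header comments; the calls behind it are the standard GitHub releases endpoints (create a draft release, then upload an asset to uploads.github.com). A hedged sketch of the same flow in Python with requests — repo, token, and file names are placeholders:

import requests

GITHUB_API = 'https://api.github.com'
UPLOADS = 'https://uploads.github.com'
repo = 'owner/repo'            # placeholder
token = '<github-api-token>'   # placeholder
headers = {'Authorization': 'token %s' % token}

# 1. create a draft release for the tag
release = requests.post(
    '%s/repos/%s/releases' % (GITHUB_API, repo),
    json={'tag_name': 'v1.0.0', 'name': 'v1.0.0', 'body': 'Release notes: ...', 'draft': True},
    headers=headers,
).json()

# 2. attach a pre-built artifact to the draft release
with open('artifact.zip', 'rb') as f:
    requests.post(
        '%s/repos/%s/releases/%d/assets?name=artifact.zip' % (UPLOADS, repo, release['id']),
        data=f,
        headers=dict(headers, **{'Content-Type': 'application/zip'}),
    )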
@mjdietzx
mjdietzx / install-tesla-driver-ubuntu.sh
Last active December 23, 2023 11:03
Install the NVIDIA Tesla driver on Ubuntu 16.04
# http://www.nvidia.com/download/driverResults.aspx/117079/en-us
wget http://us.download.nvidia.com/tesla/375.51/nvidia-driver-local-repo-ubuntu1604_375.51-1_amd64.deb
sudo dpkg -i nvidia-driver-local-repo-ubuntu1604_375.51-1_amd64.deb
sudo apt-get update
sudo apt-get -y install cuda-drivers
echo "Reboot required."
@mjdietzx
mjdietzx / residual_network.py
Last active March 26, 2024 06:33
Clean and simple Keras implementation of residual networks (ResNeXt and ResNet) accompanying the post Deep Residual Learning: https://blog.waya.ai/deep-residual-learning-9610bb62c355.
"""
Clean and simple Keras implementation of network architectures described in:
- (ResNet-50) [Deep Residual Learning for Image Recognition](https://arxiv.org/pdf/1512.03385.pdf).
- (ResNeXt-50 32x4d) [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/pdf/1611.05431.pdf).
Python 3.
"""
from keras import layers
from keras import models
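
The preview stops at the imports. The interesting part of a ResNeXt implementation in stock Keras of that era (which had no grouped convolution layer) is emulating the grouped 3x3 convolution by splitting the channels into `cardinality` paths, convolving each separately, and concatenating. A minimal sketch of that idea, assuming channels-last data format — the function and variable names are assumptions:

def grouped_convolution(y, nb_channels, _strides, cardinality=32):
    # when cardinality is 1, this degenerates to a standard convolution
    if cardinality == 1:
        return layers.Conv2D(nb_channels, kernel_size=(3, 3), strides=_strides, padding='same')(y)

    assert not nb_channels % cardinality
    _d = nb_channels // cardinality

    # split the input channels into `cardinality` groups, convolve each group
    # separately, then merge the groups back together
    groups = []
    for j in range(cardinality):
        group = layers.Lambda(lambda z, j=j: z[:, :, :, j * _d:(j + 1) * _d])(y)
        groups.append(layers.Conv2D(_d, kernel_size=(3, 3), strides=_strides, padding='same')(group))
    return layers.concatenate(groups)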
@mjdietzx
mjdietzx / residual_block.py
Last active September 18, 2021 11:21
Clean and simple Keras implementation of the residual block (non-bottleneck) accompanying Deep Residual Learning: https://blog.waya.ai/deep-residual-learning-9610bb62c355.
from keras import layers
def residual_block(y, nb_channels, _strides=(1, 1), _project_shortcut=False):
    shortcut = y

    # down-sampling is performed with a stride of 2
    y = layers.Conv2D(nb_channels, kernel_size=(3, 3), strides=_strides, padding='same')(y)
    y = layers.BatchNormalization()(y)
    y = layers.LeakyReLU()(y)
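    # (hedged continuation -- the gist preview cuts off above; this completes the
    # non-bottleneck block in the usual way: second 3x3 conv, projection shortcut
    # when dimensions change, then the additive merge)
    y = layers.Conv2D(nb_channels, kernel_size=(3, 3), strides=(1, 1), padding='same')(y)
    y = layers.BatchNormalization()(y)

    if _project_shortcut or _strides != (1, 1):
        shortcut = layers.Conv2D(nb_channels, kernel_size=(1, 1), strides=_strides, padding='same')(shortcut)
        shortcut = layers.BatchNormalization()(shortcut)

    y = layers.add([shortcut, y])
    y = layers.LeakyReLU()(y)
    return y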
@mjdietzx
mjdietzx / ResNeXt_gan.py
Last active February 14, 2020 18:10
Keras/TensorFlow implementation of a GAN in which the generator and discriminator are ResNeXt networks.
from keras import layers
from keras import models
import tensorflow as tf
#
# generator input params
#
rand_dim = (1, 1, 2048) # dimension of the generator's input tensor (gaussian noise)
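# (hedged continuation -- the preview stops at the input params; at train time
# the generator would be fed gaussian noise of this shape, e.g.:
#   z = np.random.normal(size=(batch_size,) + rand_dim)
# where `batch_size` and the numpy import are assumptions not shown above)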
@mjdietzx
mjdietzx / ResNeXt_pytorch.py
Created May 3, 2017 18:32
pyt🔥rch implementation of ResNeXt
import torch
from torch.autograd import Variable
import torch.nn as nn
class Bottleneck(nn.Module):
    cardinality = 32  # the size of the set of transformations

    def __init__(self, nb_channels_in, nb_channels, nb_channels_out, stride=1):
        super().__init__()
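        # (hedged continuation -- the preview ends at the constructor; a ResNeXt
        # bottleneck is typically 1x1 reduce -> 3x3 grouped conv with
        # groups=cardinality -> 1x1 expand, each followed by BatchNorm;
        # the attribute names below are assumptions)
        self.conv_reduce = nn.Conv2d(nb_channels_in, nb_channels, kernel_size=1, bias=False)
        self.bn_reduce = nn.BatchNorm2d(nb_channels)
        self.conv_grouped = nn.Conv2d(nb_channels, nb_channels, kernel_size=3, stride=stride,
                                      padding=1, groups=self.cardinality, bias=False)
        self.bn = nn.BatchNorm2d(nb_channels)
        self.conv_expand = nn.Conv2d(nb_channels, nb_channels_out, kernel_size=1, bias=False)
        self.bn_expand = nn.BatchNorm2d(nb_channels_out)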
@mjdietzx
mjdietzx / improved_wGAN_loss.py
Last active January 7, 2021 05:02
TensorFlow implementation of the Wasserstein distance with gradient penalty
"""
wGAN implemented on top of tensorflow as described in: [Wasserstein GAN](https://arxiv.org/pdf/1701.07875.pdf)
with improvements as described in: [Improved Training of Wasserstein GANs](https://arxiv.org/pdf/1704.00028.pdf).
"""
import tensorflow as tf
#
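# (hedged sketch -- the gist preview ends here; the gradient penalty from the
# papers above is typically computed along these lines in TF 1.x graph code;
# `discriminator`, `real_images`, `generated_images`, `batch_size` are assumed)
epsilon = tf.random_uniform(shape=[batch_size, 1, 1, 1], minval=0., maxval=1.)
x_hat = epsilon * real_images + (1. - epsilon) * generated_images
gradients = tf.gradients(discriminator(x_hat), [x_hat])[0]
slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), axis=[1, 2, 3]))
gradient_penalty = 10. * tf.reduce_mean(tf.square(slopes - 1.))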
@mjdietzx
mjdietzx / cross_entropy_loss.py
Created August 3, 2017 20:27
PyTorch implementation of a per-element cross-entropy loss
import torch
from torch import autograd
from torch import nn
class CrossEntropyLoss(nn.Module):
    """
    This criterion (`CrossEntropyLoss`) combines `LogSoftMax` and `NLLLoss` in one single class.
    NOTE: Computes per-element losses for a mini-batch (instead of the average loss over the entire mini-batch).
    """
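    # (hedged sketch -- the preview ends inside the docstring; per-element cross
    # entropy is log_softmax followed by gathering each sample's true-class
    # log-probability, with no reduction over the mini-batch)
    def forward(self, logits, targets):
        log_probs = nn.functional.log_softmax(logits, dim=1)
        # one loss value per mini-batch element, shape (batch_size,)
        return -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)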