
Akash Srivastava akashgit

  • MIT, IBM, University of Edinburgh, previously Microsoft Research and Microsoft
  • Cambridge, US | Edinburgh | Reading | Sheffield, UK
@koshian2
koshian2 / vae.py
Created September 13, 2018 17:59
Simple Variational Auto Encoder in PyTorch : MNIST, Fashion-MNIST, CIFAR-10, STL-10 (by Google Colab)
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from torchvision.utils import save_image
from torchvision.datasets import MNIST, FashionMNIST, CIFAR10, STL10
import os
import pickle
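
The gist preview above captures only the imports. Below is a minimal sketch of the reparameterized VAE those imports support, written for a flat 784-dimensional (MNIST-style) input; the layer sizes and names are assumptions for illustration, not the gist's, and it relies on the torch/nn/F imports above.

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.fc2 = nn.Linear(latent_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.fc1(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)   # z = mu + sigma * eps, differentiable w.r.t. mu and sigma
        return mu + eps * std

    def decode(self, z):
        return torch.sigmoid(self.fc3(F.relu(self.fc2(z))))

    def forward(self, x):
        mu, logvar = self.encode(x.view(-1, 784))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # reconstruction term plus KL(q(z|x) || N(0, I))
    bce = F.binary_cross_entropy(recon_x, x.view(-1, 784), reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld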
@Mahedi-61
Mahedi-61 / cuda_11.8_installation_on_Ubuntu_22.04
Last active May 4, 2024 14:18
Instructions for CUDA v11.8 and cuDNN 8.9.7 installation on Ubuntu 22.04 for PyTorch 2.1.2
#!/bin/bash
### steps ###
# Verify the system has a CUDA-capable GPU
# Download and install the NVIDIA CUDA toolkit and cuDNN
# Set up environment variables
# Verify the installation
###
### To verify your GPU is CUDA-capable, check:
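# A hedged sketch of the verification and environment steps; the exact commands in the
# full gist may differ, and the paths below assume the default CUDA 11.8 install location.
lspci | grep -i nvidia    # confirm a CUDA-capable NVIDIA GPU is present

# after installing the toolkit, put CUDA 11.8 on PATH and the loader path
export PATH=/usr/local/cuda-11.8/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

# verify the installation
nvcc --version    # should report release 11.8
nvidia-smi        # driver should list the GPU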
@eamartin
eamartin / notebook.ipynb
Last active November 6, 2022 18:53
Understanding & Visualizing Self-Normalizing Neural Networks
(notebook.ipynb preview could not be rendered in this capture)
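
The notebook body is not recoverable here, but the technique it visualizes is the SELU activation's fixed point at zero mean and unit variance (Klambauer et al., 2017). A small standalone sketch, not taken from the notebook, that checks the fixed point numerically:

import numpy as np

# SELU constants from Klambauer et al., "Self-Normalizing Neural Networks" (2017)
ALPHA, SCALE = 1.6732632423543772, 1.0507009873554805

def selu(x):
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1))

rng = np.random.default_rng(0)
x = rng.standard_normal((1024, 256))
for layer in range(20):
    # weights drawn from N(0, 1/fan_in), the initialization SNNs prescribe
    w = rng.normal(0.0, np.sqrt(1.0 / x.shape[1]), size=(x.shape[1], 256))
    x = selu(x @ w)
    print(layer, round(float(x.mean()), 3), round(float(x.var()), 3))  # stays near (0, 1)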
import numpy as np
from keras.models import Sequential
# note: these module paths and TimeDistributedDense are the legacy (pre-1.0) Keras API
from keras.layers.core import Dense, Activation, Dropout, TimeDistributedDense
from keras.layers.recurrent import LSTM

# character-level language model: load the corpus and build char <-> index lookup tables
text = open('/home/russellp/w/data/citation-graph/abstracts.txt', 'r').read()
char_to_idx = {ch: i for (i, ch) in enumerate(sorted(set(text)))}
idx_to_char = {i: ch for (ch, i) in char_to_idx.items()}
vocab_size = len(char_to_idx)
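
The snippet ends after building the vocabulary tables. A hedged guess at the character-level LSTM it likely goes on to define, written against the same legacy pre-1.0 Keras API; the layer sizes and the positional (input_dim, output_dim) argument order are assumptions, not from the snippet:

model = Sequential()
model.add(LSTM(vocab_size, 512, return_sequences=True))  # legacy (input_dim, output_dim) signature
model.add(Dropout(0.2))
model.add(TimeDistributedDense(512, vocab_size))         # per-timestep projection back onto the vocabulary
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')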
@leconteur
leconteur / cumsum.py
Created February 3, 2016 18:23
cumsum function in tensorflow
import tensorflow as tf

def cumsum(softmax):
    # Split the (batch, n) tensor into its n columns along axis 1.
    # Note the pre-1.0 TensorFlow argument order: tf.split(axis, num_splits, value).
    values = tf.split(1, softmax.get_shape()[1], softmax)
    out = []
    prev = tf.zeros_like(values[0])
    for val in values:
        s = prev + val  # running total of the columns seen so far
        out.append(s)
        prev = s
    # Stitch the running totals back together along axis 1 (pre-1.0 order: tf.concat(axis, values)).
    cumsum = tf.concat(1, out)
    return cumsum
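
This gist predates the TensorFlow 1.0 API change; in any current TensorFlow the same result is a single built-in op, for example:

cdf = tf.cumsum(softmax, axis=1)  # running total along the class axis; equivalent to the loop above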