Hamaad Shah hamaadshah

@vlandham
vlandham / part1.md
Last active March 21, 2024 12:57
Feature Branches and Pull Requests : Walkthrough

Here's a little walkthrough of how Yannick and I are using feature branches and pull requests to develop new features and add them to the project. Below are the steps I take when working on a new feature. Hopefully this, along with watching the process on GitHub, will serve as a starting point for having everyone use a similar workflow.

Questions, comments, and suggestions for improvements welcome!

Start with the latest on master

When starting a new feature, I make sure to start with the latest and greatest codebase:

git checkout master
@bartolsthoorn
bartolsthoorn / multilabel_example.py
Created April 29, 2017 12:13
Simple multi-label classification example with PyTorch and MultiLabelSoftMarginLoss (https://en.wikipedia.org/wiki/Multi-label_classification)
import torch
import torch.nn as nn
import numpy as np
import torch.optim as optim
from torch.autograd import Variable
# (1, 0) => target labels 0+2
# (0, 1) => target labels 1
# (1, 1) => target labels 3
train = []
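For context, here is a minimal end-to-end sketch of the loss this gist demonstrates, reusing the imports above but written against the current PyTorch API rather than the Variable-era API in the preview; the layer sizes, batch size, and label count are illustrative assumptions, not the gist's exact values.

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))  # 4 features -> 3 independent labels
criterion = nn.MultiLabelSoftMarginLoss()    # sigmoid + binary cross-entropy per label
optimizer = optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 4)                        # a batch of 8 samples
y = (torch.rand(8, 3) > 0.5).float()         # multi-hot targets: each sample can carry several labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()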
@naotokui
naotokui / GAN-and-trainable.py
Last active October 14, 2021 19:46
How model.trainable = False works in keras (GAN model)
# coding: utf8
## based on this article: http://qiita.com/mokemokechicken/items/937a82cfdc31e9a6ca12
import numpy as np
from keras.models import Sequential
from keras.engine.topology import Input, Container
from keras.engine.training import Model
from keras.layers.core import Dense
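To make the behavior the title refers to concrete, here is a hedged sketch of the standard pattern, reusing the Sequential and Dense imports above (layer sizes are assumptions, and plain Sequential models stand in for the Container-based setup the gist imports). The key point is that trainable is read when a model is compiled, so the same discriminator can stay trainable in its own compiled model while being frozen inside the compiled combined model.

discriminator = Sequential([Dense(16, activation='relu', input_dim=8),
                            Dense(1, activation='sigmoid')])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')   # compiled while trainable
generator = Sequential([Dense(8, activation='tanh', input_dim=4)])
discriminator.trainable = False                                       # freeze *before* compiling the combined model
gan = Sequential([generator, discriminator])
gan.compile(optimizer='adam', loss='binary_crossentropy')             # this model updates only the generator's weights

In this pattern, discriminator.train_on_batch still updates the discriminator (its own model was compiled while trainable), while gan.train_on_batch only adjusts the generator, which is exactly the asymmetry the gist investigates.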
@markito
markito / gist:a8ffcdde8cf8ebb0e69ead2363902f06
Created September 15, 2017 14:14
Spark GC/memory settings
spark.storage.memoryFraction — the ratio of the heap reserved for cached (persisted) data versus the memory left for execution (shuffles and transformations).

Suggested settings (verify against the GC logs after applying them):
-XX:+UseG1GC -XX:+PrintFlagsFinal -XX:+PrintReferenceGC -verbose:gc
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintAdaptiveSizePolicy
-XX:+UnlockDiagnosticVMOptions -XX:+G1SummarizeConcMark
-Xms88g -Xmx88g -XX:InitiatingHeapOccupancyPercent=35
-XX:ConcGCThreads=15 -XX:+AlwaysPreTouch
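As a hedged illustration of wiring these settings into a PySpark job (the property keys are standard Spark configuration; the sizes, thread counts, and the 0.5 fraction just mirror the notes above and are not universal recommendations). Heap size goes through spark.executor.memory, since Spark rejects -Xms/-Xmx inside extraJavaOptions.

from pyspark import SparkConf
from pyspark.sql import SparkSession

gc_opts = ("-XX:+UseG1GC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps "
           "-XX:InitiatingHeapOccupancyPercent=35 -XX:ConcGCThreads=15 -XX:+AlwaysPreTouch")
conf = (SparkConf()
        .set("spark.executor.memory", "88g")               # replaces -Xms88g/-Xmx88g
        .set("spark.executor.extraJavaOptions", gc_opts)
        .set("spark.storage.memoryFraction", "0.5"))       # legacy knob; newer Spark uses unified memory management
spark = SparkSession.builder.config(conf=conf).getOrCreate()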
@thomwolf
thomwolf / prepare_packed_sequence.py
Created October 3, 2017 10:55
Prepare a PyTorch PackedSequence for a batch of sequences
import torch
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence

# input_seqs is a batch of input sequences as a numpy array of integers (word indices in the vocabulary) padded with zeros
input_seqs = Variable(torch.from_numpy(input_seqs.astype('int64')).long())
# First: order the batch by decreasing sequence length
input_lengths = torch.LongTensor([torch.max(input_seqs[i, :].data.nonzero()) + 1 for i in range(input_seqs.size()[0])])
input_lengths, perm_idx = input_lengths.sort(0, descending=True)
input_seqs = input_seqs[perm_idx][:, :input_lengths.max()]
# Then pack the sequences
packed_input = pack_padded_sequence(input_seqs, input_lengths.cpu().numpy(), batch_first=True)
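A typical way to consume such a batch downstream is sketched below: embed the sorted index tensor first, then pack the embedded tensor and run an RNN (a common alternative to packing the raw indices as above). The embedding size, hidden size, and the vocab_size name are assumptions rather than part of the gist.

import torch.nn as nn
from torch.nn.utils.rnn import pad_packed_sequence

embedding = nn.Embedding(vocab_size, 50)          # vocab_size is assumed to be defined elsewhere
lstm = nn.LSTM(input_size=50, hidden_size=128, batch_first=True)
embedded = embedding(input_seqs)                  # (batch, max_len, 50), already sorted by length
packed = pack_padded_sequence(embedded, input_lengths.cpu().numpy(), batch_first=True)
packed_output, (h_n, c_n) = lstm(packed)          # the LSTM skips the padded positions
output, _ = pad_packed_sequence(packed_output, batch_first=True)   # back to (batch, max_len, 128)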
@MattKleinsmith
MattKleinsmith / get_train_valid_loader.py
Last active January 30, 2023 06:40 — forked from kevinzakka/data_loader.py
Train, Validation and Test Split for torchvision MNIST Dataset
# https://gist.github.com/kevinzakka/d33bf8d6c7f06a9d8c76d97a7879f5cb#file-data_loader-py
# This is an example for the MNIST dataset (adapted from a CIFAR-10 version).
# There's a function for creating a train and validation iterator.
# There's also a function for creating a test iterator.
# Inspired by https://discuss.pytorch.org/t/feedback-on-pytorch-for-kaggle-competitions/2252/4
# Adapted for MNIST by github.com/MatthewKleinsmith
import numpy as np
import torch
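The preview cuts off after the imports; below is a hedged sketch of what a train/validation split over torchvision MNIST with SubsetRandomSampler usually looks like. The function signature and default values here are illustrative, not the gist's exact API.

from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler
from torchvision import datasets, transforms

def get_train_valid_loader(data_dir, batch_size, valid_size=0.1, shuffle=True, seed=0):
    transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize((0.1307,), (0.3081,))])  # standard MNIST stats
    train_set = datasets.MNIST(data_dir, train=True, download=True, transform=transform)
    valid_set = datasets.MNIST(data_dir, train=True, download=True, transform=transform)
    indices = np.arange(len(train_set))
    if shuffle:
        np.random.RandomState(seed).shuffle(indices)
    split = int(valid_size * len(train_set))
    valid_idx, train_idx = indices[:split], indices[split:]
    train_loader = DataLoader(train_set, batch_size=batch_size, sampler=SubsetRandomSampler(train_idx))
    valid_loader = DataLoader(valid_set, batch_size=batch_size, sampler=SubsetRandomSampler(valid_idx))
    return train_loader, valid_loader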
@ruslangrimov
ruslangrimov / keras-tensorflow-model-profiling.py
Last active October 15, 2019 20:51
Profiling a Keras-TensorFlow model
import tensorflow as tf
from tensorflow.python.client import timeline
from keras import backend as K
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
model = ... # A Keras model
fn = K.function(model.inputs, model.outputs, options=run_options, run_metadata=run_metadata)
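The preview stops before the trace is written out; a plausible continuation (x_batch here is a placeholder input array, not from the gist) runs one traced forward pass and dumps a Chrome trace.

outputs = fn([x_batch])                              # one forward pass, traced with full metadata
tl = timeline.Timeline(run_metadata.step_stats)
with open('timeline.json', 'w') as f:
    f.write(tl.generate_chrome_trace_format())
# Load timeline.json in chrome://tracing to inspect per-op timings.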
@mattiasarro
mattiasarro / tensorflow_1_6_high_sierra_gpu.md
Last active August 24, 2022 15:24 — forked from orpcam/tensorflow_1_6_rc1_high_sierra_gpu.md
Install Tensorflow 1.6 on macOS High Sierra 10.13.3 with GPU Acceleration (without disabling SIP)

Tensorflow 1.6 on macOS High Sierra 10.13.3 with GPU Acceleration (without disabling SIP)

This gist (based on a blog post at byai.io) documents how to set up TensorFlow 1.6 with (e)GPU support without disabling SIP. Following the original gist got me a system in which training TF on the eGPU was successful, but there were various visual glitches due to the newer, less stable version of the driver.

As pointed out by ronchigram, many people are having issues with newer NVIDIA drivers, so it's worth using the nvidia-update script by Benjamin Dobell, which installs the latest stable NVIDIA web driver and, if necessary, patches it to run on your system. SIP also does not need to be disabled when using nvidia-update.

I have als

@jeremyjordan
jeremyjordan / sgdr.py
Last active December 4, 2023 13:41
Keras Callback for implementing Stochastic Gradient Descent with Restarts
from keras.callbacks import Callback
import keras.backend as K
import numpy as np
class SGDRScheduler(Callback):
    '''Cosine annealing learning rate scheduler with periodic restarts.
    # Usage
        ```python
        schedule = SGDRScheduler(min_lr=1e-5,
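The preview breaks off inside the usage example. For reference, the schedule such a callback implements is plain cosine annealing between a maximum and a minimum learning rate within each restart cycle; a minimal sketch of that formula follows, reusing the np import above (the names min_lr, max_lr, and steps_per_cycle are illustrative, not the callback's exact attributes).

def sgdr_lr(min_lr, max_lr, batch_since_restart, steps_per_cycle):
    """Cosine-annealed learning rate for the current position within a restart cycle."""
    fraction_to_restart = batch_since_restart / steps_per_cycle
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + np.cos(fraction_to_restart * np.pi))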
@thomwolf
thomwolf / gradient_accumulation.py
Last active January 16, 2024 02:38
PyTorch gradient accumulation training loop
model.zero_grad()                                   # Reset gradients tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i+1) % accumulation_steps == 0:             # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors
        if (i+1) % evaluation_steps == 0:           # Evaluate the model when we...