
Xinyu Zhang (mlzxy)

  • Rutgers University
  • New Brunswick, NJ
@mlzxy
mlzxy / safe-rm.md
Last active March 23, 2022 08:21
Yet another `rm` alias

SAFE-RM

It behaves identically to `rm`, except that it won't delete files whose names contain `important` or `.keep` (customizable through `SAFE_RM_IMPORTANCE_MARK`). The idea is to mark important files beforehand so they always survive future disk cleanups.

What triggered me to create this was accidentally removing some baseline models that took many GPU hours to train and that all my current training jobs depended on. To prevent future disasters, I came up with this solution. Hope you find it useful too.
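A minimal sketch of how such an alias could work (this is my own illustration, not the gist's actual implementation; the function name `safe_rm` and the default mark pattern are assumptions):

```shell
# Sketch: refuse to delete any path whose name contains the importance mark.
safe_rm() {
  local mark="${SAFE_RM_IMPORTANCE_MARK:-important|\.keep}"
  local f
  for f in "$@"; do
    case "$f" in -*) continue ;; esac  # skip option flags
    if printf '%s\n' "$f" | grep -Eq "$mark"; then
      echo "safe-rm: refusing to delete marked file: $f" >&2
      return 1
    fi
  done
  command rm "$@"
}
```

With this in place, `alias rm=safe_rm` would make `rm model.important` fail loudly while ordinary files are deleted as usual.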

Demo

# in real-world scenarios, computational resources are usually fixed,
# so we only use a time budget here
game_tree = Node(current_state)
while time_budget > 0:
    start_time = time()
    build(game_tree)
    time_budget -= time() - start_time
# in CFR, we simulate all actions
v[I] = {a: cfr(h + [a], {**π_i, P(h): π_i[P(h)] * σ[t][I][a]}, i, t)
        for a in A[I]}
# in outcome-sampling MCCFR, we only need to sample one a from A[I]
a = sample(A[I], σ[t][I])  # or use `ϵ * uniform + (1-ϵ) * σ[t][I]`
v[I][a] = mccfr(h + [a], {**π_i, P(h): π_i[P(h)] * σ[t][I][a]})
@mlzxy
mlzxy / cfr.py
Last active December 26, 2020 03:56
def cfr(
    h=[],    # history
    π_i={},  # a mapping from player_id to π_i(h)
    i=0,     # player_id
    t=0,     # timestep
):
    # here I skip the chance-node case for simplicity
    if is_terminal(h):  # pseudocode: h is a terminal history
        z = h
        return u(i, z)  # utility u(z) for player i
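CFR turns the cumulative counterfactual regrets into the next iteration's strategy via regret matching; a minimal sketch of that update rule (the names `R` and `regret_matching` are mine, not from the gist):

```python
def regret_matching(R):
    """Map cumulative regrets {action: regret} to a strategy {action: prob}.

    Actions get probability proportional to their positive regret; if no
    regret is positive, fall back to the uniform strategy.
    """
    positive = {a: max(r, 0.0) for a, r in R.items()}
    total = sum(positive.values())
    if total > 0:
        return {a: p / total for a, p in positive.items()}
    n = len(R)
    return {a: 1.0 / n for a in R}
```

For example, regrets `{'a': 2.0, 'b': -1.0, 'c': 2.0}` yield the strategy `{'a': 0.5, 'b': 0.0, 'c': 0.5}`.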
@mlzxy
mlzxy / gist:64ef62b4c54b7ea9478b21e95c5db153
Created March 26, 2019 10:23 — forked from fdob/gist:5983637
in-diff: Use inotify to capture changes to a file, and output diffs to STDOUT. Needs inotify-tools.
#!/bin/bash
# hook into inotify to watch a file, and generate
# diffs for changes in real-time
#
# requires inotify-tools (apt-get install inotify-tools)
#
# @author Filipe Dobreira <filipe.dobreira@ez.no>
usage() {
  echo "Usage:"
  echo "  in-diff <file>|help"
@mlzxy
mlzxy / tune_cifar.py
Created April 28, 2018 04:20 — forked from zhreshold/tune_cifar.py
Cifar10 with gluon model
import argparse
import logging
import random
import time
import mxnet as mx
from mxnet import nd
from mxnet import image
from mxnet import gluon
from mxnet import autograd
import numpy as np
@mlzxy
mlzxy / train_imagenet.py
Created April 28, 2018 04:19 — forked from zhreshold/train_imagenet.py
Train imagenet using gluon
import argparse, time
import logging
logging.basicConfig(level=logging.INFO)
fh = logging.FileHandler('training.log')
logger = logging.getLogger()
logger.addHandler(fh)
import mxnet as mx
from mxnet import gluon
from mxnet.gluon import nn
array=("human-computer-interaction" "design-principles" "social-computing" "interaction-techniques" "user-research" "infodesign" "designexperiments" "interaction-design-capstone")
for i in "${array[@]}"; do
  ../coursera-dl -n ~/.netrc "${i}"
done
echo "download finished"
@mlzxy
mlzxy / gru.py
Created November 15, 2017 02:39 — forked from danijar/gru.py
Gated Recurrent Unit with Layer norm and Xavier initializer
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
class GRU(tf.contrib.rnn.RNNCell):
def __init__(
@mlzxy
mlzxy / inception_resnet_v2_train_val_2ndtry.txt
Created July 29, 2017 13:00 — forked from revilokeb/inception_resnet_v2_train_val_2ndtry.txt
Caffe train_val for learning inception-resnet-v2 - 2ndtry
name: "Inception_Resnet2_Imagenet"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {