Kang Min Yoo (kaniblu)

  • NAVER Corporation
  • Seoul
kaniblu / gmail.py
Last active September 16, 2019 13:56
Send Gmail messages from the command line
#!/usr/bin/env python
import io
import sys
import yaml
import pathlib
import argparse
import smtplib
import getpass
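The preview shows only the imports. As a rough sketch (not the gist's actual body), a sender built on these imports typically logs into Gmail's SMTP endpoint and posts a MIME message; send_gmail and its parameters below are hypothetical names.
import smtplib
from email.mime.text import MIMEText

def send_gmail(user, password, recipient, subject, body):
    # Build a plain-text message and send it over Gmail's SSL SMTP endpoint.
    msg = MIMEText(body)
    msg["Subject"] = subject
    msg["From"] = user
    msg["To"] = recipient
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(user, password)
        server.sendmail(user, [recipient], msg.as_string())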
kaniblu / benchmark
Created August 7, 2019 12:18
A simple script for benchmarking disk I/O speeds. Run it with `sudo` from a directory that resides on the disk to be tested.
#!/bin/bash
if [ "$EUID" -ne 0 ]
then
echo "root privilege is required. re-run this script with 'sudo'." >&2
exit 1
fi
TEMP_PATH=$(tempfile -d "$(pwd)")
kaniblu / gumbel.py
Last active May 26, 2019 10:46
Sampling from the Gumbel-softmax distribution
import torch
def gumbel_softmax(logits, tau=1.0, eps=1e-10):
"""Generate samples from the Gumbel-softmax distribution.
(arXiv: 1611.01144)
Examples:
>>> # sampling from a Gumbel-softmax distribution given a categorical distribution
>>> gumbel_softmax(torch.tensor([0.3, 0.7]).log(), tau=0.1)
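The preview cuts off inside the docstring. For reference, a minimal sketch of the standard Gumbel-softmax sampler that the title and signature suggest: perturb the logits with Gumbel(0, 1) noise, then apply a temperature-scaled softmax. gumbel_softmax_sketch is an illustrative name, not the gist's exact body.
import torch

def gumbel_softmax_sketch(logits, tau=1.0, eps=1e-10):
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1); eps avoids log(0).
    uniform = torch.rand_like(logits)
    gumbel = -torch.log(-torch.log(uniform + eps) + eps)
    # Temperature-scaled softmax over the perturbed logits.
    return torch.softmax((logits + gumbel) / tau, dim=-1)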
kaniblu / sparse.py
Created February 14, 2019 08:03
Some sparse operators for 2D `torch.sparse` Tensors
import torch.sparse as sp
def sparse_2d_densesum(x, dim=None):
assert len(x.size()) == 2
if dim is None:
return x.values().sum()
values = x.values()
return values.new(x.size(1 - dim)).zero_() \
.scatter_add(0, x.indices()[1 - dim], values)
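A small usage sketch for the function above, assuming a coalesced 2-D COO tensor so that `indices()` and `values()` are available; the data is illustrative.
import torch

indices = torch.tensor([[0, 1, 1], [2, 0, 2]])   # row, column of each nonzero
values = torch.tensor([3.0, 4.0, 5.0])
x = torch.sparse_coo_tensor(indices, values, (2, 3)).coalesce()
print(sparse_2d_densesum(x, dim=0))              # per-column sums: tensor([4., 0., 8.])
print(sparse_2d_densesum(x))                     # total: tensor(12.)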
kaniblu / interfaces
Created November 29, 2018 11:39
/etc/network/interfaces template for a static IP address and MAC address spoofing
auto eno1
iface eno1 inet static
address 147.46.15.255
hwaddress ether 00:50:56:00:00:00
gateway 147.46.15.1
netmask 255.255.255.0
dns-nameservers 147.46.37.10 147.46.80.1 8.8.8.8
kaniblu / masked_softmax.py
Last active September 1, 2021 13:19
Masked Softmax in PyTorch
import torch
import torch.nn as nn
class MaskedSoftmax(nn.Module):
def __init__(self):
super(MaskedSoftmax, self).__init__()
self.softmax = nn.Softmax(1)
def forward(self, x, mask=None):
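The preview ends at the `forward` signature. A typical masked softmax along dim 1 fills masked-out logits with -inf before normalizing; the sketch below shows that pattern (MaskedSoftmaxSketch is an illustrative name, not necessarily the gist's body, and mask is assumed to be a 0/1 or boolean tensor).
import torch
import torch.nn as nn

class MaskedSoftmaxSketch(nn.Module):
    def forward(self, x, mask=None):
        # Push masked-out positions to -inf so softmax assigns them zero probability.
        if mask is not None:
            x = x.masked_fill(mask == 0, float("-inf"))
        return torch.softmax(x, dim=1)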
kaniblu / map_async.py
Last active February 22, 2019 04:00
Using multiprocessing with a progress bar
import itertools
import multiprocessing
import multiprocessing.pool as mp
import tqdm
# gensim.utils.chunkize_serial
def chunkize_serial(iterable, chunksize, as_numpy=False):
"""
kaniblu / rnn_init.py
Created October 26, 2017 05:14
PyTorch LSTM and GRU Orthogonal Initialization and Positive Bias
from torch.nn import init as I

def init_gru(cell, gain=1):
cell.reset_parameters()
# orthogonal initialization of recurrent weights
for _, hh, _, _ in cell.all_weights:
for i in range(0, hh.size(0), cell.hidden_size):
I.orthogonal(hh[i:i + cell.hidden_size], gain=gain)
def init_lstm(cell, gain=1):
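The preview cuts off at `init_lstm`. Per the title, such an initializer usually orthogonalizes each recurrent weight block and sets a positive forget-gate bias; the sketch below is illustrative rather than the gist's exact code and uses the non-deprecated `init.orthogonal_` name.
from torch.nn import init as I

def init_lstm_sketch(cell, gain=1):
    cell.reset_parameters()
    for _, hh, _, bias_hh in cell.all_weights:
        # Orthogonal initialization of each gate's recurrent weight block (i, f, g, o).
        for i in range(0, hh.size(0), cell.hidden_size):
            I.orthogonal_(hh[i:i + cell.hidden_size], gain=gain)
        # Positive forget-gate bias (second hidden_size-sized chunk) so the cell
        # tends to remember early in training.
        bias_hh.data[cell.hidden_size:2 * cell.hidden_size].fill_(1.0)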