
@bohanli
bohanli / st-gumbel.py
Created Jul 31, 2019 — forked from yzh119/st-gumbel.py
ST-Gumbel-Softmax-Pytorch
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable


def sample_gumbel(shape, eps=1e-20):
    """Sample from Gumbel(0, 1) by inverse transform: g = -log(-log(U))."""
    U = torch.rand(shape).cuda()
    return -Variable(torch.log(-torch.log(U + eps) + eps))
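The helper above draws Gumbel(0, 1) noise; the reason it is useful is that adding such noise to logits and taking an argmax samples exactly from the softmax distribution (the Gumbel-max trick underlying Gumbel-softmax). A minimal framework-free sketch of that property, with illustrative names not taken from the gist:

```python
import math
import random


def sample_gumbel(eps=1e-20):
    # Inverse-transform sample from Gumbel(0, 1): g = -log(-log(U)).
    u = random.random()
    return -math.log(-math.log(u + eps) + eps)


def gumbel_argmax(logits):
    # argmax(logits + Gumbel noise) is distributed as Categorical(softmax(logits)).
    noisy = [l + sample_gumbel() for l in logits]
    return max(range(len(noisy)), key=noisy.__getitem__)


logits = [1.0, 0.0, -1.0]
z = sum(math.exp(l) for l in logits)
probs = [math.exp(l) / z for l in logits]

random.seed(0)
n = 200_000
counts = [0, 0, 0]
for _ in range(n):
    counts[gumbel_argmax(logits)] += 1
freqs = [c / n for c in counts]
# Empirical argmax frequencies track softmax(logits) closely.
```

Replacing the hard argmax with a temperature-controlled softmax over the same noisy logits gives the differentiable relaxation used in the gist.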
bohanli / tmux-cheatsheet.markdown
Created Jul 31, 2019 — forked from ryerh/tmux-cheatsheet.markdown
Tmux shortcuts & cheat sheet

Note: this guide targets Tmux 2.3 and above. Most of it also applies to older versions, but mouse support, vi mode, and plugin management may not be compatible with them.

Tmux shortcuts & cheat sheet

Start a new session:

tmux [new -s session-name -n window-name]

Restore a session:

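As a quick illustration of the session workflow above (the session and window names here are just examples; assumes a standard tmux install):

```
# Start a new named session with a named first window
tmux new -s work -n editor

# List running sessions
tmux ls

# Re-attach to (restore) a detached session
tmux attach -t work

# Kill a session when done
tmux kill-session -t work
```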
bohanli / masked_cross_entropy.py
Created Nov 3, 2017 — forked from jihunchoi/masked_cross_entropy.py
PyTorch workaround for masking cross entropy loss
import torch
from torch.autograd import Variable


def _sequence_mask(sequence_length, max_len=None):
    """Build a (batch_size, max_len) mask that is True at valid
    (non-padded) positions, given a batch of sequence lengths."""
    if max_len is None:
        max_len = sequence_length.data.max()
    batch_size = sequence_length.size(0)
    # torch.range is deprecated (and inclusive); arange gives 0 .. max_len-1.
    seq_range = torch.arange(0, max_len).long()
    seq_range_expand = seq_range.unsqueeze(0).expand(batch_size, max_len)
    seq_range_expand = Variable(seq_range_expand)
    if sequence_length.is_cuda:
        seq_range_expand = seq_range_expand.cuda()
    seq_length_expand = (sequence_length.unsqueeze(1)
                         .expand_as(seq_range_expand))
    # Position i is valid iff i < that sequence's length.
    return seq_range_expand < seq_length_expand
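The mask is then used to drop the loss at padded positions and average over valid tokens only. A minimal pure-Python sketch of the same masking logic (illustrative names, no PyTorch dependency):

```python
import math


def sequence_mask(lengths, max_len=None):
    # Row b is True at positions 0..lengths[b]-1 and False at padding.
    if max_len is None:
        max_len = max(lengths)
    return [[i < n for i in range(max_len)] for n in lengths]


def masked_cross_entropy(log_probs, targets, lengths):
    # log_probs[b][t][c]: log-probability of class c at step t of sequence b.
    # Sum -log p(target) over valid steps only; normalize by token count.
    mask = sequence_mask(lengths, max_len=len(log_probs[0]))
    total, count = 0.0, 0
    for b, row in enumerate(mask):
        for t, valid in enumerate(row):
            if valid:
                total -= log_probs[b][t][targets[b][t]]
                count += 1
    return total / count


# Batch of 2 sequences (true lengths 2 and 1), 3 steps, 2 classes.
p = math.log(0.5)
log_probs = [[[p, p]] * 3, [[p, p]] * 3]
targets = [[0, 1, 0], [1, 0, 0]]
loss = masked_cross_entropy(log_probs, targets, lengths=[2, 1])
# Every valid token has probability 0.5, so the loss is -log(0.5).
```

Without the mask, the padded steps would dilute the average and gradients would flow through positions the model should never be penalized on.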
bohanli / keybase.md

Keybase proof

I hereby claim:

  • I am bohanli on github.
  • I am bohan (https://keybase.io/bohan) on keybase.
  • I have a public key whose fingerprint is 05B6 8943 689C BAE7 784C F8E4 B22B 8EE6 08D9 A103

To claim this, I am signing this object: