Hanrui Wang (Hanrui-Wang), GitHub gists
Hanrui-Wang / get_nonzero_index.py
Created September 23, 2019 20:48
Get the indices of nonzero elements in PyTorch

import torch

mask_i = (target >= start) & (target < end)  # boolean mask for values in [start, end)
indices_i = mask_i.nonzero().squeeze()       # indices of the True entries, as a 1-D tensor
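A runnable sketch of the same idea with concrete values (target, start, and end are placeholders in the gist):

import torch

target = torch.tensor([1, 5, 3, 7, 2])
start, end = 2, 6
mask_i = (target >= start) & (target < end)
indices_i = mask_i.nonzero().squeeze()
print(indices_i)  # tensor([1, 2, 4]) -- the positions of 5, 3, and 2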
Hanrui-Wang / pytorch_latency_measure.py
Created September 16, 2019 22:53
Measure the latency of a PyTorch CUDA operation

import torch

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
z = x + y
end.record()

# Wait for all queued kernels to finish before reading the timer
torch.cuda.synchronize()
print(start.elapsed_time(end))  # elapsed time in milliseconds
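The snippet assumes x and y are CUDA tensors defined elsewhere. A self-contained sketch (the shapes are arbitrary) that adds a warm-up pass, since the first CUDA operation pays a one-time initialization cost that would otherwise inflate the measurement:

import torch

x = torch.randn(1000, 1000, device='cuda')
y = torch.randn(1000, 1000, device='cuda')
_ = x + y                 # warm-up: keep one-time CUDA init out of the timing
torch.cuda.synchronize()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
z = x + y
end.record()
torch.cuda.synchronize()
print('{:.3f} ms'.format(start.elapsed_time(end)))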
Hanrui-Wang / sysargv.py
Created September 16, 2019 15:15
Read command-line arguments from sys.argv

import sys

def main(args):
    print(args)

if __name__ == '__main__':
    main(sys.argv[1:])
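Invoked from the shell, everything after the script name arrives as a list of strings:

$ python sysargv.py foo bar 42
['foo', 'bar', '42']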
Hanrui-Wang / overload
Created August 14, 2019 15:21
How to shadow (overload) an existing standard-library module with a local file of the same name
$ tree foo
foo
├── string.py
└── test.py
0 directories, 2 files
$ cat foo/string.py
ascii_lowercase='this is personal module'
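The listing truncates before foo/test.py. A plausible content for it (hypothetical, not shown in the gist): because the script's own directory sits first on sys.path, import string resolves to the local file rather than the standard-library module.

$ cat foo/test.py
import string
print(string.ascii_lowercase)
$ cd foo && python test.py
this is personal module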
Hanrui-Wang / digits_to_onehot.py
Created August 13, 2019 19:32
Convert digit labels to one-hot vectors

import torch

print(labels.detach().numpy())
labels.unsqueeze_(-1)                 # (batch_size,) -> (batch_size, 1), as scatter_ expects
print(labels.detach().numpy())
labels_onehot = torch.FloatTensor(batch_size, 10)
labels_onehot.zero_()
labels_onehot.scatter_(1, labels, 1)  # write a 1 at each row's label column
print(labels_onehot.detach().numpy())
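A self-contained run (batch_size and the label values are assumptions; the index tensor passed to scatter_ must be a LongTensor):

import torch

batch_size = 4
labels = torch.tensor([0, 2, 1, 9])
labels = labels.unsqueeze(-1)               # (4,) -> (4, 1)
labels_onehot = torch.zeros(batch_size, 10)
labels_onehot.scatter_(1, labels, 1)
print(labels_onehot)  # row i has a 1 in column labels[i], zeros elsewhere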
Hanrui-Wang / TBPTT.py
Created July 25, 2019 02:32
Truncated backpropagation through time (TBPTT)

import torch

# Truncated backpropagation: detach hidden states so gradients
# do not flow past the current chunk
def detach(states):
    return [state.detach() for state in states]

# Train the model
for epoch in range(num_epochs):
    # Set initial hidden and cell states
    states = (torch.zeros(num_layers, batch_size, hidden_size).to(device),
              torch.zeros(num_layers, batch_size, hidden_size).to(device))
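The gist cuts off at the state initialization. In a typical TBPTT inner loop (a sketch; ids, seq_length, model, criterion, and optimizer are assumed to be defined elsewhere), the states are detached before each chunk so that backpropagation stops at the chunk boundary:

    for i in range(0, ids.size(1) - seq_length, seq_length):
        inputs = ids[:, i:i+seq_length].to(device)
        targets = ids[:, i+1:i+1+seq_length].to(device)

        states = detach(states)  # cut the graph: no gradient flows past this chunk
        outputs, states = model(inputs, states)
        loss = criterion(outputs, targets.reshape(-1))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()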
Hanrui-Wang / readfile.py
Created July 25, 2019 01:39
Read a file line by line

with open(path, 'r') as f:
    for line in f:
        # iterating the handle streams one line at a time;
        # no need for lines = f.read().split('\n')[:-1]
        print(line.rstrip('\n'))
Hanrui-Wang / model_test_th.py
Created July 25, 2019 01:15
How to use model.eval() and torch.no_grad() on the test set

# Test the model
model.eval()  # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)
with torch.no_grad():  # disable gradient tracking for inference: less memory, faster
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
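The listing truncates mid-loop; the usual continuation (a sketch reusing the variable names above) tallies accuracy over the whole test set:

        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Test accuracy: {} %'.format(100 * correct / total))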
Hanrui-Wang / formatter.py
Created July 18, 2019 00:42
Make full use of str.format: fill, alignment, and width

print('Epoch [{0:<10d}/{1:>10d}], Loss: {2:=^20.4f}'.format(epoch + 1, num_epochs, loss.item()))
# Epoch [10        /        60], Loss: =======2.5578=======
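Each spec reads as fill, align, width, and (for the float) precision. Two more self-contained examples of the same options:

print('{:*>8}'.format(42))   # '******42' -- fill '*', right-align, width 8
print('{:^8}'.format('hi'))  # '   hi   ' -- space fill, centered, width 8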
Hanrui-Wang / tensor_from_numpy_diff.py
Created July 17, 2019 23:50
The difference between torch.Tensor and torch.from_numpy()

import numpy as np
import torch

a = np.arange(10)
ft = torch.Tensor(a)      # alias for torch.FloatTensor: copies the data, casts to float32
it = torch.from_numpy(a)  # shares memory with a and keeps its dtype

a.dtype   # dtype('int64')
ft.dtype  # torch.float32
it.dtype  # torch.int64
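Because from_numpy() shares the array's buffer while torch.Tensor() makes a copy, an in-place edit to a shows up only in it:

a[0] = 100
print(it[0])  # tensor(100) -- shares a's memory
print(ft[0])  # tensor(0.)  -- independent copy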