GitHub Gists of Jae Hee Lee (dschaehi)
@dschaehi
dschaehi / better_bibtex.js
Last active March 26, 2024 09:57
Zotero Better BibLaTeX script for arXiv and conference papers with missing booktitle
if (Translator.BetterBibTeX) {
  if (tex.has.doi) {
    tex.remove("doi");
  }
  if (tex.has.url) {
    tex.remove("url");
  }
  if (tex.has.issn) {
    tex.remove("issn");
  }
}
@dschaehi
dschaehi / actions-zotero.yml
Last active March 26, 2024 09:56
Actions and Tags for Zotero
type: ActionsTagsBackup
author: jaeheelee
platformVersion: 7.0.0-beta.68+c31a40c74
pluginVersion: 1.0.0-beta.35
timestamp: '2024-03-26T09:55:19.478Z'
actions:
  default0:
    event: 1
    operation: 1
    data: /unread
@dschaehi
dschaehi / commenting.sty
Created July 11, 2023 20:46
A LaTeX package for commenting. It allows line breaks and environments inside comments.
\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{commenting}[2022/10/29 by Jae Hee Lee (http://jaeheelee.de)]
\RequirePackage{hyperref}
\RequirePackage{ifdraft}
\RequirePackage{graphicx}
\RequirePackage{xcolor}
\RequirePackage{xspace}
\RequirePackage{environ}
\RequirePackage{soul}
\RequirePackage{twemojis}
@dschaehi
dschaehi / gradient_accumulation.py
Created September 28, 2022 15:40 — forked from thomwolf/gradient_accumulation.py
PyTorch gradient accumulation training loop
model.zero_grad()                                   # Reset gradients tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i+1) % accumulation_steps == 0:             # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors
        if (i+1) % evaluation_steps == 0:           # Evaluate the model when we...
            evaluate_model()                        # ...have no gradients accumulated
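As a sanity check on the normalization: accumulating the gradients of per-batch losses divided by accumulation_steps matches the gradient of one combined batch when the batches have equal size. A minimal sketch, with a hypothetical model, loss, and data that are not part of the gist:

import torch

# Hypothetical setup: a linear model, MSE loss, and two equal-sized
# batches standing in for training_set.
model = torch.nn.Linear(3, 1)
loss_function = torch.nn.MSELoss()
batches = [(torch.randn(4, 3), torch.randn(4, 1)) for _ in range(2)]

# Accumulate gradients of the normalized per-batch losses.
model.zero_grad()
for inputs, labels in batches:
    (loss_function(model(inputs), labels) / len(batches)).backward()
accumulated = model.weight.grad.clone()

# Gradient of the loss over the combined batch.
model.zero_grad()
inputs = torch.cat([b[0] for b in batches])
labels = torch.cat([b[1] for b in batches])
loss_function(model(inputs), labels).backward()

print(torch.allclose(accumulated, model.weight.grad, atol=1e-6))  # True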
@dschaehi
dschaehi / flatten_json.py
Flattens a nested JSON-like dict in place, joining keys with dots
def flatten_json(json):
    if type(json) == dict:
        for k, v in list(json.items()):
            if type(v) == dict:
                flatten_json(v)
                json.pop(k)
                for k2, v2 in v.items():
                    json[k + "." + k2] = v2
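A hypothetical call (not part of the gist) illustrating the in-place flattening into dot-separated keys:

doc = {"a": {"b": 1, "c": {"d": 2}}, "e": 3}
flatten_json(doc)
print(doc)  # {'e': 3, 'a.b': 1, 'a.c.d': 2}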
@dschaehi
dschaehi / hamming_score.py
Created November 6, 2021 10:32
The Hamming score in PyTorch
import torch


def hamming_score(pred, answer):
    out = ((pred & answer).sum(dim=1) / (pred | answer).sum(dim=1)).mean()
    if out.isnan():
        out = torch.tensor(1.0)
    return out


answer = torch.tensor([[0, 1, 0], [0, 1, 1], [1, 0, 1], [0, 0, 1]])
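The preview stops after defining answer; a hypothetical pred completes the example. Each row contributes its intersection-over-union, and the row scores are averaged:

pred = torch.tensor([[0, 1, 0], [0, 1, 0], [1, 0, 1], [1, 0, 1]])
# Row-wise (pred & answer) / (pred | answer): 1/1, 1/2, 2/2, 1/2 -> mean 0.75
print(hamming_score(pred, answer))  # tensor(0.7500)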
@dschaehi
dschaehi / extract_features.py
Last active March 15, 2022 21:27
Extracting ResNet Features Using PyTorch
from collections import OrderedDict

import torch.nn as nn
from torchvision import models


def gen_feature_extractor(model, output_layer):
    layers = OrderedDict()
    for k, v in model._modules.items():
        layers[k] = v
        if k == output_layer:
            break
    return nn.Sequential(layers)
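A hypothetical usage, assuming recent torchvision layer names for resnet18 (where "avgpool" is the last module before the classifier head):

import torch

resnet = models.resnet18(weights=None)  # weights=None: torchvision >= 0.13 API
extractor = gen_feature_extractor(resnet, output_layer="avgpool")
features = extractor(torch.randn(1, 3, 224, 224))
print(features.shape)  # torch.Size([1, 512, 1, 1])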
@dschaehi
dschaehi / emacs.json
Created March 29, 2020 08:01
Karabiner Elements complex modifications for emacs
{
  "title": "Emacs keys",
  "rules": [
    {
      "description": "Change right_option + a to right_control + a",
      "manipulators": [
        {
          "type": "basic",
          "from": {
            "key_code": "a",
            "modifiers": { "mandatory": ["right_option"] }
          },
          "to": [{ "key_code": "a", "modifiers": ["right_control"] }]
        }
      ]
    }
  ]
}
@dschaehi
dschaehi / install-cuda-10-bionic.sh
Created August 10, 2019 13:11 — forked from bogdan-kulynych/install-cuda-10-bionic.sh
Install CUDA 10 on Ubuntu 18.04
#!/bin/bash
# Install CUDA Toolkit 10
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub && sudo apt update
sudo dpkg -i cuda-repo-ubuntu1804_10.0.130-1_amd64.deb
sudo apt update
sudo apt install -y cuda
@dschaehi
dschaehi / jupyterlab_shortcuts.json
Last active June 17, 2022 06:17
JupyterLab Keyboard Shortcuts
{
  "shortcuts": [
    {
      "command": "notebook:toggle-all-cell-line-numbers",
      "keys": [
        "Alt L"
      ],
      "selector": ".jp-Notebook:focus"
    },
    // Moving cells