Xingjian Du (diggerdu)

@guysmoilov
guysmoilov / Git pre-commit hook for large files.md
Last active May 4, 2025 05:31
Git pre-commit hook for large files

This hook warns you before you accidentally commit large files to git. It's very hard to reverse such an accidental commit, so it's better to prevent it in advance.

Since you will likely want this hook to run in all your git repos, a setup script is attached that adds it to every repo you create or clone in the future.

Of course, you can also just download it directly into the hooks directory of an existing git repo.

If you find this script useful, you might enjoy our more heavy-duty project FastDS, which aims to make it easier to work with versioning in data science projects.
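For illustration, here is a minimal sketch of such a check written as a Python hook (git hooks can be any executable; the 5 MB threshold and the exact git invocation are assumptions, not the gist's actual implementation):

#!/usr/bin/env python3
# Minimal sketch of a large-file pre-commit check.
# Save as .git/hooks/pre-commit and mark it executable (chmod +x).
import os
import subprocess
import sys

LIMIT_BYTES = 5 * 1024 * 1024  # assumed threshold, not the gist's actual value

def staged_paths():
    # Paths that are added, copied, or modified in the index.
    out = subprocess.check_output(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"]
    )
    return out.decode().splitlines()

def main():
    offenders = [
        (p, os.path.getsize(p))
        for p in staged_paths()
        if os.path.isfile(p) and os.path.getsize(p) > LIMIT_BYTES
    ]
    for path, size in offenders:
        print(f"warning: {path} is {size / 1e6:.1f} MB", file=sys.stderr)
    if offenders:
        print("Large files staged; aborting commit (bypass with --no-verify).",
              file=sys.stderr)
        sys.exit(1)  # a non-zero exit status blocks the commit

if __name__ == "__main__":
    main()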

@mingfeima
mingfeima / pytorch_cpu_perf_bkm.md
Last active September 6, 2024 01:40
BKM for PyTorch CPU Performance

General guidelines for CPU performance on PyTorch

This file serves as a BKM (best known methods) for getting better CPU performance out of PyTorch, mostly focusing on inference and deployment. A Chinese version is available here.

1. Use channels last memory format

Right now, on the PyTorch CPU path, you may choose among three memory formats; a short conversion sketch follows the list.

  • torch.contiguous_format: the default memory format, also referred to as NCHW.
  • torch.channels_last: also referred to as NHWC.
  • torch._mkldnn: the mkldnn blocked format.
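A minimal conversion sketch (the model and input shape are placeholders for illustration, not from the gist):

import torch
import torchvision.models as models

# Convert both the weights and the input to channels_last (NHWC);
# resnet50 here is just a stand-in model.
model = models.resnet50().eval()
model = model.to(memory_format=torch.channels_last)

x = torch.randn(1, 3, 224, 224)              # logical NCHW shape
x = x.to(memory_format=torch.channels_last)  # NHWC physical layout

with torch.no_grad():
    y = model(x)  # CPU convolutions can dispatch to faster NHWC kernels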
@keunwoochoi
keunwoochoi / pseudo_cqt_pytorch.py
Last active March 22, 2023 15:08
To compute pseudo-CQT (constant-Q transform approximated via STFT) in PyTorch.
import librosa
import torch

# Private librosa helper that builds the CQT filter bank in the FFT domain.
cqt_filter_fft = librosa.constantq.__cqt_filter_fft

class PseudoCqt():
    """A class to compute pseudo-CQT with Pytorch.
    Written by Keunwoo Choi
    API (+implementations) follows librosa (https://librosa.github.io/librosa/generated/librosa.core.pseudo_cqt.html)

    Usage:
        src, _ = librosa.load(filename)
        src_tensor = torch.tensor(src)
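The preview is truncated above. For reference, the quantity this class mirrors can also be computed directly with librosa's own API (a minimal sketch; the file path is a placeholder):

import librosa

# Reference computation with librosa's built-in pseudo_cqt,
# which the PseudoCqt class above follows. "audio.wav" is a placeholder.
src, sr = librosa.load("audio.wav")
C = librosa.pseudo_cqt(y=src, sr=sr, hop_length=512, n_bins=84)
print(C.shape)  # (n_bins, n_frames)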
@danijar
danijar / blog_tensorflow_scope_decorator.py
Last active January 17, 2023 01:58
TensorFlow Scope Decorator
# Working example for my blog post at:
# https://danijar.github.io/structuring-your-tensorflow-models
import functools
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
def doublewrap(function):
    """
    A decorator decorator, allowing the decorator to be used without
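The preview cuts off mid-docstring. For context, a common way this doublewrap pattern is completed looks like the following (a sketch of the standard technique, not necessarily the post's exact code):

import functools

def doublewrap(function):
    """Allow a decorator to be used with or without parentheses."""
    @functools.wraps(function)
    def decorator(*args, **kwargs):
        # Bare usage: @decorator — args is just the wrapped callable.
        if len(args) == 1 and not kwargs and callable(args[0]):
            return function(args[0])
        # Parenthesized usage: @decorator(...) — defer, then wrap.
        return lambda wrapped: function(wrapped, *args, **kwargs)
    return decorator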
@shagunsodhani
shagunsodhani / Batch Normalization.md
Last active July 25, 2023 18:07
Notes for "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" paper

The Batch Normalization paper describes a method for addressing various issues in training deep neural networks. It makes normalization a part of the architecture itself and reports a significant reduction in the number of iterations required to train the network.

Issues With Training Deep Neural Networks

Internal Covariate shift

Covariate shift refers to a change in the input distribution to a learning system. In a deep network, the input to each layer is affected by the parameters of all the preceding layers, so even small parameter changes get amplified as they propagate through the network. The resulting change in the distribution of inputs to the network's internal layers is known as internal covariate shift.

It is well established that networks converge faster if their inputs are whitened (i.e. zero mean, unit variance) and uncorrelated; internal covariate shift leads to just the opposite.
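To make the normalization step concrete, here is a minimal sketch of the batch norm transform from the paper, with per-feature mini-batch statistics and the learnable scale/shift (numpy; the shapes are assumptions for illustration):

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize x of shape (batch, features) with mini-batch statistics."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta            # learnable scale and shift

# Toy check: 32 examples, 4 features, deliberately shifted and scaled.
x = np.random.randn(32, 4) * 3.0 + 7.0
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))  # ~0s and ~1s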

@bshillingford
bshillingford / arxiv2kindle.ipynb
Last active March 1, 2024 12:50
arxiv2kindle: recompiles an arxiv paper for kindle-sized screens, and sends it to your wifi-enabled kindle
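The notebook itself fails to render below; as a rough sketch of the core idea under stated assumptions (the arxiv ID, page geometry, and main-file name are illustrative, not the notebook's exact code):

import re
import subprocess
import tarfile
import urllib.request

# Sketch: fetch an arxiv paper's LaTeX source, shrink the page geometry
# to roughly a Kindle screen, and recompile.
arxiv_id = "1502.03167"  # placeholder paper ID
urllib.request.urlretrieve(f"https://arxiv.org/e-print/{arxiv_id}", "paper.tar.gz")
with tarfile.open("paper.tar.gz") as tar:
    tar.extractall("paper")

main_tex = "paper/main.tex"  # assumed; real code would detect the main file
with open(main_tex) as f:
    tex = f.read()

# Insert a kindle-sized page geometry right after \documentclass.
kindle_geometry = r"\usepackage[papersize={3.6in,4.8in},margin=0.2in]{geometry}"
tex = re.sub(r"(\\documentclass[^\n]*\n)",
             lambda m: m.group(1) + kindle_geometry + "\n",
             tex, count=1)

with open(main_tex, "w") as f:
    f.write(tex)

subprocess.run(["pdflatex", "-interaction=nonstopmode", "main.tex"], cwd="paper")
# The resulting PDF can then be sent to the Kindle's @kindle.com address.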