skeeet / lap_pyramid_loss.py
Created Nov 27, 2020 — forked from alper111/lap_pyramid_loss.py
PyTorch implementation of Laplacian pyramid loss
import torch

def gauss_kernel(size=5, device=torch.device('cpu'), channels=3):
    # 5x5 binomial (Gaussian-like) kernel, normalized to sum to 1.
    kernel = torch.tensor([[1., 4., 6., 4., 1.],
                           [4., 16., 24., 16., 4.],
                           [6., 24., 36., 24., 6.],
                           [4., 16., 24., 16., 4.],
                           [1., 4., 6., 4., 1.]])
    kernel /= 256.
    # One copy of the kernel per channel, for depthwise (grouped) convolution.
    kernel = kernel.repeat(channels, 1, 1, 1)
    kernel = kernel.to(device)
    return kernel
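The preview stops at the kernel. Below is a minimal sketch, not the gist's exact code, of how such a kernel is typically used to build a Laplacian pyramid and an L1 loss between pyramids; the names conv_gauss, laplacian_pyramid, and lap_pyramid_loss are illustrative.

import torch
import torch.nn.functional as F

def conv_gauss(img, kernel):
    # Depthwise Gaussian blur with reflection padding (kernel from gauss_kernel above).
    img = F.pad(img, (2, 2, 2, 2), mode='reflect')
    return F.conv2d(img, kernel.to(img.device), groups=img.shape[1])

def laplacian_pyramid(img, kernel, max_levels=3):
    current = img
    pyramid = []
    for _ in range(max_levels):
        blurred = conv_gauss(current, kernel)
        down = blurred[:, :, ::2, ::2]                      # decimate by 2
        up = F.interpolate(down, size=current.shape[-2:],   # back to the current size
                           mode='bilinear', align_corners=False)
        pyramid.append(current - up)                        # band-pass residual
        current = down
    pyramid.append(current)                                 # low-frequency residue
    return pyramid

def lap_pyramid_loss(x, y, kernel, max_levels=3):
    # L1 distance between corresponding pyramid levels of prediction and target,
    # e.g. lap_pyramid_loss(pred, target, gauss_kernel(channels=pred.shape[1], device=pred.device))
    px = laplacian_pyramid(x, kernel, max_levels)
    py = laplacian_pyramid(y, kernel, max_levels)
    return sum(F.l1_loss(a, b) for a, b in zip(px, py))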
skeeet / mlSpeedTests.swift
Created May 19, 2020 — forked from akirasosa/mlSpeedTests.swift
Benchmark Core ML model in iOS.
import CoreML
import XCTest
@testable import mlsample

class mlsampleTests: XCTestCase {
    override func setUp() {
        super.setUp()
    }
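The gist itself measures on-device latency from an XCTest. As a rough Python-side analogue (not the gist's approach), the same .mlmodel can be timed on macOS with coremltools; the model path and input name below are hypothetical.

import time
import numpy as np
import coremltools as ct

model = ct.models.MLModel("MySample.mlmodel")                       # hypothetical model file
x = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}    # hypothetical input name/shape

model.predict(x)            # warm-up run
runs = 100
start = time.perf_counter()
for _ in range(runs):
    model.predict(x)
print(f"avg latency: {(time.perf_counter() - start) / runs * 1000:.2f} ms")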
skeeet / README.md
Created Apr 27, 2020 — forked from smoser/README.md
Boot a specific installed Ubuntu kernel using grub-reboot and grub-set-default

Ubuntu Grub Boot Kernel

Boot a specific installed Ubuntu kernel using grub-reboot and grub-set-default.

This allows you to pick what kernel you want to boot on next reboot, or set the default, without having to know much about how grub works or editing config files.

Usage

  Usage: boot-kernel [options] [kernel]
     call grub-reboot or grub-set-default to boot the provided kernel.
skeeet / .tmux.conf
Created Nov 3, 2019 — forked from paulodeleo/.tmux.conf
Tmux configuration to enable mouse scroll and mouse panel select, taken from: http://brainscraps.wikia.com/wiki/Extreme_Multitasking_with_tmux_and_PuTTY
# Make mouse useful in copy mode
setw -g mode-mouse on
# Allow mouse to select which pane to use
set -g mouse-select-pane on
# Allow mouse dragging to resize panes
set -g mouse-resize-pane on
# Allow mouse to select windows
setw -g mouse-select-window on
skeeet / xcode_ramdisk.sh
Created Apr 12, 2012 — forked from MaximKeegan/xcode_ramdisk.sh
Create a RAM disk for use with Xcode
#!/bin/sh
# Create a RAM disk with same perms as mountpoint
# Script based on http://itux.idev.pro/2012/04/iservice-speed-up-your-xcode-%D0%BD%D0%B5%D0%BA%D0%BE%D1%82%D0%BE%D1%80%D1%8B%D0%B5-%D1%81%D0%BF%D0%BE%D1%81%D0%BE%D0%B1%D1%8B/ with some additions
# Usage: sudo ./xcode_ramdisk.sh start
USERNAME=$(logname)
TMP_DIR="/private/tmp"
RUN_DIR="/var/run"
SYS_CACHES_DIR="/Library/Caches"
skeeet / CMakeLists.txt
Created Apr 1, 2019 — forked from zeryx/CMakeLists.txt
Minimal PyTorch 1.0 -> C++ full example; demo image at: https://i.imgur.com/hiWRITj.jpg
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(cpp_shim)
set(CMAKE_PREFIX_PATH ../libtorch)
find_package(Torch REQUIRED)
find_package(OpenCV REQUIRED)
add_executable(testing main.cpp)
message(STATUS "OpenCV library status:")
message(STATUS " config: ${OpenCV_DIR}")
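On the Python side, a C++ demo like this needs a serialized TorchScript module to load with torch::jit::load. A minimal, hypothetical export sketch (model choice and file name are placeholders):

import torch
import torchvision

# Trace a model with a dummy input and save a TorchScript archive.
model = torchvision.models.resnet18(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save("model.pt")   # the C++ executable loads this via torch::jit::load("model.pt")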
skeeet / danbooru_faces.md
Created Feb 20, 2019 — forked from stormraiser/danbooru_faces.md
Danbooru Faces dataset

Danbooru Faces v0.1

Description

This dataset contains ~443k anime face images of size 256x256, drawn by ~7,000 artists and obtained from Danbooru.

Collection

We first downloaded JSON files of all existing posts numbered from 1 to 2,800,000 using their API. We filtered the posts by the following criteria:

skeeet / infinite_dataloader.py
Created Feb 20, 2019 — forked from MFreidank/infinite_dataloader.py
A pytorch DataLoader that generates an unbounded/infinite number of minibatches from the dataset.
from torch.utils.data import DataLoader

class InfiniteDataLoader(DataLoader):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Initialize an iterator over the dataset.
        self.dataset_iterator = super().__iter__()

    def __iter__(self):
        return self
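The preview is cut off before __next__, which presumably catches StopIteration and re-creates the underlying iterator. The same effect can be had with a plain generator; a self-contained sketch (the dataset and loop are made up for illustration):

import torch
from torch.utils.data import DataLoader, TensorDataset

def infinite_batches(loader):
    # Restart the DataLoader whenever it is exhausted, yielding minibatches forever.
    while True:
        for batch in loader:
            yield batch

dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)
batches = infinite_batches(loader)

for step in range(1000):              # train for a number of steps rather than epochs
    inputs, targets = next(batches)   # never raises StopIteration
    # ... forward / backward / optimizer step ...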
skeeet / residual_lstm_keras.py
Created May 5, 2017 — forked from bzamecnik/model_summary.txt
Residual LSTM in Keras
def make_residual_lstm_layers(input, rnn_width, rnn_depth, rnn_dropout):
    """
    The intermediate LSTM layers return sequences, while the last returns a single element.
    The input is also a sequence, so the input and output of an LSTM layer have matching
    shapes and can be summed; this works for all layers but the last.
    """
    for i in range(rnn_depth):
        return_sequences = i < rnn_depth - 1
        x_rnn = LSTM(rnn_width, dropout_W=rnn_dropout, dropout_U=rnn_dropout, return_sequences=return_sequences)(input)
        if return_sequences:
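The preview cuts off at the residual branch, and dropout_W/dropout_U are old Keras 1 arguments. A self-contained sketch of the same idea with current tf.keras (the width-mismatch projection on the first layer is an assumption, not necessarily what the gist does):

import tensorflow as tf
from tensorflow.keras import layers

def residual_lstm_stack(inputs, rnn_width, rnn_depth, rnn_dropout):
    x = inputs
    for i in range(rnn_depth):
        return_sequences = i < rnn_depth - 1
        x_rnn = layers.LSTM(rnn_width, dropout=rnn_dropout, recurrent_dropout=rnn_dropout,
                            return_sequences=return_sequences)(x)
        if return_sequences:
            # Intermediate layer: residual sum of the layer's input and output.
            if x.shape[-1] != rnn_width:
                x = layers.Dense(rnn_width)(x)   # assumed projection when widths differ
            x = layers.add([x, x_rnn])
        else:
            # Last layer returns a single vector, so no residual sum.
            x = x_rnn
    return x

inputs = tf.keras.Input(shape=(32, 16))   # (timesteps, features)
outputs = layers.Dense(1)(residual_lstm_stack(inputs, rnn_width=64, rnn_depth=3, rnn_dropout=0.2))
model = tf.keras.Model(inputs, outputs)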
skeeet / pytorch_bilinear_interpolation.md
Created Oct 6, 2018 — forked from peteflorence/pytorch_bilinear_interpolation.md
Bilinear interpolation in PyTorch, and benchmarking vs. numpy

Here's a simple implementation of bilinear interpolation on tensors using PyTorch.

I wrote this up since I ended up learning a lot about options for interpolation in both the numpy and PyTorch ecosystems. More generally than just interpolation, too, it's also a nice case study in how PyTorch magically can put very numpy-like code on the GPU (and by the way, do autodiff for you too).

For interpolation in PyTorch, this open issue calls for more interpolation features. There is now a nn.functional.grid_sample() feature but at least at first this didn't look like what I needed (but we'll come back to this later).

In particular I wanted to take an image, W x H x C, and sample it many times at different random locations. Note also that this is different than upsampling, which exhaustively samples and also doesn't give us flexibility in choosing the sample locations.
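As a minimal sketch of the operation described, sampling a W x H x C image tensor at arbitrary real-valued locations, and not the post's own code (bilinear_sample is an illustrative name; torch.nn.functional.grid_sample is the built-in route mentioned above):

import torch

def bilinear_sample(img, x, y):
    # img: (H, W, C) tensor; x, y: 1-D tensors of sample coordinates in pixel units.
    H, W, C = img.shape
    x0 = x.floor().long().clamp(0, W - 2)
    y0 = y.floor().long().clamp(0, H - 2)
    x1, y1 = x0 + 1, y0 + 1
    # Fractional offsets within the pixel cell.
    wx = (x - x0.float()).unsqueeze(1)
    wy = (y - y0.float()).unsqueeze(1)
    # Gather the four neighbouring pixels and blend them; gradients flow through the weights.
    Ia, Ib = img[y0, x0], img[y1, x0]
    Ic, Id = img[y0, x1], img[y1, x1]
    top = Ia * (1 - wx) + Ic * wx
    bottom = Ib * (1 - wx) + Id * wx
    return top * (1 - wy) + bottom * wy

img = torch.rand(480, 640, 3)
x = torch.rand(1000) * 639
y = torch.rand(1000) * 479
samples = bilinear_sample(img, x, y)   # (1000, 3)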