@skeeet
skeeet / danbooru_faces.md
Created February 20, 2019 11:31 — forked from stormraiser/danbooru_faces.md
Danbooru Faces dataset

Danbooru Faces v0.1

Description

This dataset contains ~443k anime face images of size 256x256, drawn by ~7,000 artists, obtained from Danbooru.

Collection

We first downloaded JSON files of all existing posts numbered from 1 to 2,800,000 using their API. We filtered the posts by the following criteria:
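The download script itself is not included in this gist; as a rough sketch of that step, per-post metadata could be pulled from Danbooru's public JSON endpoint roughly like this (the URL pattern, ID range, field name, and rate limiting are assumptions for illustration, not from the dataset description):

import json
import time
import urllib.request

def fetch_post(post_id):
    # Assumed public endpoint; deleted or restricted posts simply return errors.
    url = "https://danbooru.donmai.us/posts/{}.json".format(post_id)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.loads(resp.read())
    except Exception:
        return None

for post_id in range(1, 101):  # the dataset covered post IDs 1 to 2,800,000
    post = fetch_post(post_id)
    if post is not None:
        print(post_id, post.get("tag_string_artist", ""))
    time.sleep(0.5)  # stay well under the API rate limit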

@skeeet
skeeet / infinite_dataloader.py
Created February 20, 2019 10:41 — forked from MFreidank/infinite_dataloader.py
A pytorch DataLoader that generates an unbounded/infinite number of minibatches from the dataset.
from torch.utils.data import DataLoader

class InfiniteDataLoader(DataLoader):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Initialize an iterator over the dataset.
        self.dataset_iterator = super().__iter__()

    def __iter__(self):
        return self

    def __next__(self):
        try:
            return next(self.dataset_iterator)
        except StopIteration:
            # Dataset exhausted: start a fresh pass and keep going.
            self.dataset_iterator = super().__iter__()
            return next(self.dataset_iterator)
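A quick usage sketch (the toy dataset and batch size are illustrative, not part of the gist): iterating past the end of the dataset simply starts a new pass.

import torch
from torch.utils.data import TensorDataset

data = TensorDataset(torch.arange(10).float())
loader = InfiniteDataLoader(data, batch_size=4, shuffle=True)

batches = iter(loader)
for step in range(7):  # more minibatches than a single epoch contains
    (batch,) = next(batches)
    print(step, batch.tolist())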
@skeeet
skeeet / residual_lstm_keras.py
Created May 5, 2017 05:46 — forked from bzamecnik/model_summary.txt
Residual LSTM in Keras
from keras.layers import LSTM, add

def make_residual_lstm_layers(input, rnn_width, rnn_depth, rnn_dropout):
    """
    The intermediate LSTM layers return sequences, while the last returns a single element.
    The input is also a sequence, so the residual sum of a layer's input and output is only
    possible for all layers but the last; it also requires the input feature dimension to
    equal rnn_width.
    """
    x = input
    for i in range(rnn_depth):
        return_sequences = i < rnn_depth - 1
        x_rnn = LSTM(rnn_width, dropout_W=rnn_dropout, dropout_U=rnn_dropout,
                     return_sequences=return_sequences)(x)
        if return_sequences:
            # Residual connection: element-wise sum of the layer's input and output.
            x = add([x, x_rnn])
        else:
            x = x_rnn
    return x
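A hypothetical way to wire this into a model (layer sizes are illustrative; the input feature dimension has to equal rnn_width for the first residual sum to be valid):

from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(32, 64))  # 32 timesteps, 64 features per step
x = make_residual_lstm_layers(inputs, rnn_width=64, rnn_depth=3, rnn_dropout=0.2)
outputs = Dense(1, activation='sigmoid')(x)
model = Model(inputs, outputs)
model.summary()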
@skeeet
skeeet / pytorch_bilinear_interpolation.md
Created October 6, 2018 09:22 — forked from peteflorence/pytorch_bilinear_interpolation.md
Bilinear interpolation in PyTorch, and benchmarking vs. numpy

Here's a simple implementation of bilinear interpolation on tensors using PyTorch.

I wrote this up since I ended up learning a lot about options for interpolation in both the numpy and PyTorch ecosystems. More generally than just interpolation, it's also a nice case study in how PyTorch can magically put very numpy-like code on the GPU (and, by the way, do autodiff for you too).

For interpolation in PyTorch, this open issue calls for more interpolation features. There is now a nn.functional.grid_sample() feature but at least at first this didn't look like what I needed (but we'll come back to this later).

In particular I wanted to take an image, W x H x C, and sample it many times at different random locations. Note also that this is different from upsampling, which exhaustively samples and doesn't give us flexibility over where we sample.
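A minimal sketch of the idea (not the gist's implementation; the function name, tensor layout, and shapes are assumptions):

import torch

def bilinear_sample(img, xs, ys):
    # img: (H, W, C) float tensor; xs, ys: 1-D float tensors of sample coordinates.
    H, W, _ = img.shape
    x0 = xs.floor().long().clamp(0, W - 2)
    y0 = ys.floor().long().clamp(0, H - 2)
    x1, y1 = x0 + 1, y0 + 1
    # Fractional offsets of each sample point inside its pixel cell.
    wx = (xs - x0.float()).unsqueeze(1)
    wy = (ys - y0.float()).unsqueeze(1)
    # Gather the four neighbouring pixels and blend them.
    Ia, Ib = img[y0, x0], img[y1, x0]
    Ic, Id = img[y0, x1], img[y1, x1]
    top = Ia * (1 - wx) + Ic * wx
    bottom = Ib * (1 - wx) + Id * wx
    return top * (1 - wy) + bottom * wy

# Sample a random 256x256 RGB image at 1000 random locations.
img = torch.rand(256, 256, 3)
xs = torch.rand(1000) * 255
ys = torch.rand(1000) * 255
samples = bilinear_sample(img, xs, ys)  # shape (1000, 3)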

@skeeet
skeeet / std.cpp
Created April 19, 2018 09:35 — forked from mahuna13/std.cpp
standard library functions for Halide
#include "std_try.h"
#include <math.h>
using namespace Halide;
#define PI 3.14159
/*
Interpolations
*/
@skeeet
skeeet / GeneticAlgorithm.swift
Created April 9, 2018 19:57 — forked from tombaranowicz/GeneticAlgorithm.swift
Simple Starter for experiments with Genetic Algorithms in Swift
//: Simple Genetic Algorithm Starter in Swift 3
import UIKit
import Foundation
let AVAILABLE_GENES:[Int] = Array(1...100)
let DNA_LENGTH = 6
let TOURNAMENT_SIZE = 5
let MAX_GENERATIONS_COUNT = 100

import torch
from torch import nn
from torch.autograd import Variable
import torch.nn.functional as F

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, n_layers=1):
        super(RNN, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
@skeeet
skeeet / neural.c
Created January 9, 2018 14:30 — forked from hollance/neural.c
Playing with BNNS on macOS 10.12. The "hello world" of neural networks.
/*
The "hello world" of neural networks: a simple 3-layer feed-forward
network that implements an XOR logic gate.
The first layer is the input layer. It has two neurons a and b, which
are the two inputs to the XOR gate.
The middle layer is the hidden layer. This has two neurons h1, h2 that
will learn what it means to be an XOR gate.
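For reference, the same 2-2-1 network can be written out in a few lines of NumPy; the weights below are a hand-picked solution for illustration, not the weights from the original BNNS example:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = np.array([[20.0, 20.0], [-20.0, -20.0]])  # input -> hidden (h1 ~ OR, h2 ~ NAND)
b1 = np.array([-10.0, 30.0])
W2 = np.array([20.0, 20.0])                    # hidden -> output (AND of h1, h2)
b2 = -30.0

def xor(a, b):
    h = sigmoid(W1 @ np.array([a, b], dtype=float) + b1)
    return sigmoid(W2 @ h + b2)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(float(xor(a, b))))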
@skeeet
skeeet / build.sh
Created December 7, 2017 11:09 — forked from graetzer/build.sh
PJSIP 2.6 iPhone iOS 9.0 build script
#!/bin/bash
echo "Building pjsip:"
# change this to whatever DEVPATH works
# if you get make errors, maybe redownload pjsip and try again
export DEVPATH=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer
MIN_IOS="-miphoneos-version-min=9.0" ARCH="-arch i386" CFLAGS="-O2 -m32 -mios-simulator-version-min=9.0 -fembed-bitcode" LDFLAGS="-O2 -m32 -mios-simulator-version-min=9.0 -fembed-bitcode" ./configure-iphone
ACTION = build
AD_HOC_CODE_SIGNING_ALLOWED = NO
ALTERNATE_GROUP = staff
ALTERNATE_MODE = u+w,go-w,a+rX
ALTERNATE_OWNER = grantdavis
ALWAYS_SEARCH_USER_PATHS = NO
ALWAYS_USE_SEPARATE_HEADERMAPS = YES
APPLE_INTERNAL_DEVELOPER_DIR = /AppleInternal/Developer
APPLE_INTERNAL_DIR = /AppleInternal
APPLE_INTERNAL_DOCUMENTATION_DIR = /AppleInternal/Documentation