@skeeet
skeeet / Makefile
Created August 9, 2017 12:04 — forked from figgis/Makefile
ffmpeg qp values
# use pkg-config for getting CFLAGS and LDLIBS
FFMPEG_LIBS=    libavdevice   \
                libavformat   \
                libavfilter   \
                libavcodec    \
                libswresample \
                libswscale    \
                libavutil     \

CFLAGS += -Wall -g
CFLAGS := $(shell pkg-config --cflags $(FFMPEG_LIBS)) $(CFLAGS)
LDLIBS := $(shell pkg-config --libs $(FFMPEG_LIBS)) $(LDLIBS)
@skeeet
skeeet / DSSIM.py
Created June 3, 2017 11:54 — forked from Dref360/DSSIM.py
Difference of structural similarity using TensorFlow and Keras. Works ONLY on tf >= 0.11
import keras.backend as K
import tensorflow as tf

class Model:
    def __init__(self, batch_size):
        self.batch_size = batch_size

    def loss_DSSIS_tf11(self, y_true, y_pred):
        """Need tf0.11rc to work"""
        y_true = tf.reshape(y_true, [self.batch_size] + get_shape(y_pred)[1:])
        y_pred = tf.reshape(y_pred, [self.batch_size] + get_shape(y_pred)[1:])
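The preview cuts off mid-method (and relies on a get_shape helper defined later in the gist). As a hedged sketch of the same idea on a recent TensorFlow, the structural-dissimilarity loss can be written directly with tf.image.ssim; the function name and the max_val=1.0 scaling assumption are mine, not the gist's:

import tensorflow as tf

def dssim_loss(y_true, y_pred):
    # DSSIM = (1 - SSIM) / 2; assumes image values are scaled to [0, 1]
    ssim = tf.image.ssim(y_true, y_pred, max_val=1.0)
    return (1.0 - ssim) / 2.0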
@skeeet
skeeet / libjpeg.sh
Created May 20, 2017 08:06 — forked from eminarcissus/libjpeg.sh
Download & Compile Libjpeg for iOS (all architectures)
# Builds a Libjpeg framework for the iPhone and the iPhone Simulator.
# Creates a set of universal libraries that can be used on an iPhone and in the
# iPhone simulator. Then creates a pseudo-framework to make using libjpeg in Xcode
# less painful.
#
# To configure the script, define:
# IPHONE_SDKVERSION: iPhone SDK version (e.g. 8.1)
#
# Then go get the source tar.bz of the libjpeg you want to build, shove it in the
# same directory as this script, and run "./libjpeg.sh". Grab a cuppa. And voila.
@skeeet
skeeet / rnn_viz_keras.py
Created May 4, 2017 05:56 — forked from tokestermw/rnn_viz_keras.py
Recurrent Neural Network (RNN) visualizations using Keras.
from __future__ import print_function
from keras import backend as K
from keras.engine import Input, Model, InputSpec
from keras.layers import Dense, Activation, Dropout, Lambda
from keras.layers import Embedding, LSTM
from keras.optimizers import Adam
from keras.preprocessing import sequence
from keras.utils.data_utils import get_file
from keras.datasets import imdb
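The preview shows only the imports. As a rough sketch of the visualization trick the gist builds on (the sizes and toy input below are assumptions, not the gist's values): set return_sequences=True so the LSTM returns every timestep's hidden state, then read the activations from a model whose output is that sequence.

import numpy as np
from keras.layers import Input, Embedding, LSTM
from keras.models import Model

maxlen, vocab = 80, 20000                        # assumed sequence length / vocabulary size
inp = Input(shape=(maxlen,), dtype='int32')
emb = Embedding(vocab, 128)(inp)
states = LSTM(64, return_sequences=True)(emb)    # keep every timestep's hidden state

viz_model = Model(inp, states)
x = np.random.randint(0, vocab, size=(2, maxlen))
acts = viz_model.predict(x)
print(acts.shape)                                # (2, 80, 64): one hidden state per token, ready to plot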
@skeeet
skeeet / AttentionWithContext.py
Created May 4, 2017 05:49 — forked from nigeljyng/AttentionWithContext.py
Keras Layer that implements an Attention mechanism, with a context/query vector, for temporal data. Supports Masking. Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf] "Hierarchical Attention Networks for Document Classification"
class AttentionWithContext(Layer):
    """
    Attention operation, with a context/query vector, for temporal data.
    Supports Masking.
    Follows the work of Yang et al. [https://www.cs.cmu.edu/~diyiy/docs/naacl16.pdf]
    "Hierarchical Attention Networks for Document Classification"
    by using a context vector to assist the attention
    # Input shape
        3D tensor with shape: `(samples, steps, features)`.
    # Output shape
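For reference, the computation the docstring describes (Yang et al. 2016) boils down to a few lines. This is a hedged numpy sketch for a single sequence, with W, b and u standing in as placeholders for the layer's learned weights:

import numpy as np

steps, features = 10, 64
h = np.random.randn(steps, features)        # RNN hidden states, one per timestep
W = np.random.randn(features, features)     # learned projection (placeholder values)
b = np.zeros(features)
u = np.random.randn(features)               # learned context/query vector

uit = np.tanh(h.dot(W) + b)                 # per-step hidden representation
scores = uit.dot(u)                         # similarity of each step with the context vector
ait = np.exp(scores - scores.max())
ait /= ait.sum()                            # softmax attention weights over timesteps
output = (h * ait[:, None]).sum(axis=0)     # attention-weighted sum -> (features,) vector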
@skeeet
skeeet / keras_attention_wrapper.py
Created April 12, 2017 04:39 — forked from wassname/keras_attention_wrapper.py
A keras attention layer that wraps RNN layers.
"""
A keras attention layer that wraps RNN layers.
Based on tensorflows [attention_decoder](https://github.com/tensorflow/tensorflow/blob/c8a45a8e236776bed1d14fd71f3b6755bd63cc58/tensorflow/python/ops/seq2seq.py#L506)
and [Grammar as a Foreign Language](https://arxiv.org/abs/1412.7449).
date: 20161101
author: wassname
url: https://gist.github.com/wassname/5292f95000e409e239b9dc973295327a
"""
@skeeet
skeeet / fail.py
Created April 3, 2017 10:27 — forked from guicho271828/fail.py
minimal failure cases, only on tensorflow backend
from keras.layers import Input, Dense
from keras.models import Model, Sequential
from keras.datasets import mnist
from keras.layers.normalization import BatchNormalization as BN

autoencoder1 = Sequential([
    Dense(128, activation='relu', input_shape=(784,)),
    BN(),
    Dense(784, activation='relu'),
])
# Example for my blog post at:
# http://danijar.com/introduction-to-recurrent-networks-in-tensorflow/
import functools
import sets
import tensorflow as tf

def lazy_property(function):
    attribute = '_' + function.__name__
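The preview stops at the decorator's first line. Its body, following the caching pattern described in the linked blog post, memoizes the wrapped property so the TensorFlow graph nodes it builds are created only once; this is a sketch reconstructed from that pattern rather than copied from the gist:

import functools

def lazy_property(function):
    attribute = '_' + function.__name__

    @functools.wraps(function)
    def wrapper(self):
        # build and cache the value on first access, reuse it afterwards
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)
    return property(wrapper)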
@skeeet
skeeet / ffmppeg-advanced-playbook-nvenc-and-libav-and-vaapi.md
Created March 24, 2017 11:05 — forked from Brainiarc7/ffmppeg-advanced-playbook-nvenc-and-libav-and-vaapi.md
FFmpeg's playbook: Advanced encoding options with hardware-based acceleration for both NVIDIA's NVENC and Intel's VAAPI-based hardware encoders in both ffmpeg and libav.

FFmpeg and libav's playbook: Advanced encoding options with hardware-based acceleration, NVIDIA's NVENC and Intel's VAAPI-based encoder.

Hello guys,

Continuing from this guide to building ffmpeg and libav with NVENC and VAAPI enabled, this snippet will cover advanced options that you can use with ffmpeg and libav on both NVENC and VAAPI hardware-based encoders.

For ffmpeg:

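The preview ends before the command itself. Purely as an illustration of the kind of NVENC invocation the playbook discusses (file names, bitrate and preset are placeholders, not the playbook's values), driven from Python to match the other snippets:

import subprocess

cmd = [
    "ffmpeg", "-y",
    "-i", "input.mp4",        # placeholder source file
    "-c:v", "h264_nvenc",     # NVIDIA NVENC H.264 encoder
    "-preset", "slow",
    "-b:v", "5M",             # placeholder target bitrate
    "-c:a", "copy",
    "output_nvenc.mp4",       # placeholder output file
]
subprocess.check_call(cmd)
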
@skeeet
skeeet / async_worker_pool.py
Created March 15, 2017 10:19 — forked from thehesiod/async_worker_pool.py
Asynchronous Worker Pool
import asyncio
from datetime import datetime, timezone
import os

def utc_now():
    # utcnow returns a naive datetime, so we have to set the timezone manually <sigh>
    return datetime.utcnow().replace(tzinfo=timezone.utc)

class Terminator:
    pass
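The preview stops at the Terminator sentinel. A minimal sketch of the pool idea it implies (worker count and the placeholder jobs are assumptions, not thehesiod's implementation): N worker tasks drain an asyncio.Queue and stop when they pull the sentinel.

import asyncio

_SENTINEL = object()

async def worker(queue):
    while True:
        job = await queue.get()
        if job is _SENTINEL:
            queue.task_done()
            break
        await job                              # each queued job is an awaitable
        queue.task_done()

async def run_pool(num_workers=3):
    queue = asyncio.Queue()
    workers = [asyncio.ensure_future(worker(queue)) for _ in range(num_workers)]
    for _ in range(10):
        queue.put_nowait(asyncio.sleep(0.01))  # placeholder jobs
    for _ in workers:
        queue.put_nowait(_SENTINEL)            # one sentinel per worker shuts the pool down
    await queue.join()
    await asyncio.gather(*workers)

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(run_pool())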