Justin Lebar (jlebar)
@jlebar
jlebar / gist:4731401
Created February 7, 2013 14:53
Convert hg patches to git patches. curl -L hg_patch | this-script | git am
#!/usr/bin/env python
r'''Convert an hg-exported patch to a patch suitable for use by git am.
>>> hg_patch_to_git_patch(StringIO('# HG changeset patch\n# User Foo <foo@bar.com>\n# Node ID deadbeef\n# Parent cafebabe\nCommit\n\nMsg\n\ndiff -\ndiffdiff'))
From: Foo <foo@bar.com>
Subject: Commit
<BLANKLINE>
Msg
<BLANKLINE>
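The preview cuts off before the implementation. As a minimal sketch of the header rewrite the doctest implies (the parsing logic here is inferred from that doctest, not taken from the full gist), it could look like:

```python
import sys
from io import StringIO

def hg_patch_to_git_patch(infile, outfile=sys.stdout):
    """Rewrite 'hg export' headers ('# User ...' etc.) into git-am headers."""
    lines = infile.read().split('\n')
    i = 0
    # Consume the '# ...' header block, remembering the author line.
    user = None
    while i < len(lines) and lines[i].startswith('#'):
        if lines[i].startswith('# User '):
            user = lines[i][len('# User '):]
        i += 1
    print('From: %s' % user, file=outfile)
    # The first non-header line is the commit summary.
    print('Subject: %s' % lines[i], file=outfile)
    i += 1
    # Everything up to the first 'diff' line is the long description.
    while i < len(lines) and not lines[i].startswith('diff '):
        print(lines[i], file=outfile)
        i += 1
    # The diff itself passes through unchanged.
    for line in lines[i:]:
        print(line, file=outfile)
```

This reproduces the doctest's output for the sample input; the real gist may handle more header fields and edge cases.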
jlebar / gist:5367521
Last active December 16, 2015 03:09
Spawns one thread, then does a series of computations on the main thread and on the spawned thread. Periodically outputs the speed of those two computations.
// Compile me with -lrt -lpthread -lm.
//
// To force this program to run on one core, use |taskset 1|.
//
// See post on http://jlebar.com for why this is relevant.
#include <pthread.h>
#include <math.h>
#include <stdio.h>
#include <unistd.h>
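The C source is truncated above. As a rough Python analogue of the structure the description outlines (this is not the original program, and it measures once rather than periodically): spawn a worker thread, spin on a small computation in both the worker and the main thread, and report each loop's iteration count as a speed measure.

```python
import math
import threading
import time

def spin(counter, stop_event):
    # Repeatedly do a small floating-point computation, counting iterations.
    x = 0.0
    while not stop_event.is_set():
        x += math.sin(counter[0])
        counter[0] += 1

def measure(duration=0.2):
    """Run one spinner on a worker thread and one on the main thread,
    then return the two iteration counts."""
    main_count, worker_count = [0], [0]
    stop = threading.Event()
    worker = threading.Thread(target=spin, args=(worker_count, stop))
    worker.start()
    deadline = time.time() + duration
    x = 0.0
    while time.time() < deadline:
        x += math.sin(main_count[0])
        main_count[0] += 1
    stop.set()
    worker.join()
    return main_count[0], worker_count[0]
```

Note the C version is the interesting one for the blog post: pinned to one core with `taskset 1`, the two C threads contend for the CPU in a way Python's GIL obscures.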
jlebar / reorder-patch
Created August 31, 2012 01:49
Reorder a patch file so it's easier to review
#!/usr/bin/env python
"""Re-order the files in a patch according to a set of rules.
Input is accepted on stdin, and written to stdout.
Usage: cat patch | reorder-patch > reordered-patch
"""
jlebar / tf_computation.py
Created November 7, 2018 01:17
A simple TensorFlow computation
def model_fn(x, y, z):
  return tf.reduce_sum(x + y * z)
jlebar / xla_compile.py
Created November 7, 2018 01:18
Simple example of using xla.compile
from tensorflow.contrib.compiler import xla

def model_fn(x, y, z):
  return tf.reduce_sum(x + y * z)

def create_and_run_graph():
  with tf.Session() as sess:
    x = tf.placeholder(tf.float32, name='x')
    y = tf.placeholder(tf.float32, name='y')
    z = tf.placeholder(tf.float32, name='z')
jlebar / xla_compile_switch.py
Created November 7, 2018 01:19
Example of switching xla.compile on or off via a flag
if should_use_xla():
  result = xla.compile(model_fn, (x, y, z))[0]
else:
  result = model_fn(x, y, z)
jlebar / uninferrable_shapes.py
Created November 7, 2018 01:20
Example of a TF function with uninferrable shapes
def model_fn_random_shape():
  random_dim_size = tf.random_uniform(
      shape=[], minval=0, maxval=5, dtype=tf.int32)
  # Return a vector with a random number of elements, all of them 42.0.
  return tf.fill([random_dim_size], 42.)

def run_random_shapes_model():
  with tf.Session() as sess:
    x = tf.placeholder(tf.float32, name='x')
    result = xla.compile(model_fn_random_shape)[0]
jlebar / tf_dynamic_shapes.py
Created November 7, 2018 01:21
Example of TF function with dynamic but nonetheless inferrable shapes
def model_fn_changing_shapes(x):
  return 2 * x

def run_changing_shapes_model():
  with tf.Session() as sess:
    x = tf.placeholder(tf.float32, name='x')
    result = xla.compile(model_fn_changing_shapes, (x,))[0]
    a = sess.run(result, feed_dict={x: [1., 2.]})
    b = sess.run(result, feed_dict={x: [1., 2., 3.]})
jlebar / unsupported_op.py
Created November 7, 2018 01:21
Example of a TF op that's not supported by XLA
def model_fn_unsupported_op(x):
  return tf.where(tf.cast(x, tf.bool))
jlebar / create_gce_instance.sh
Created November 14, 2018 21:27
Create a GCE instance with 8 V100 GPUs
export INSTANCE_NAME="xla-benchmark-8xV100"
export IMAGE_FAMILY="tf-1-12-cu100"
export PROJECT_NAME="<your project name>"

gcloud beta compute instances create $INSTANCE_NAME \
    --project=$PROJECT_NAME \
    --machine-type=n1-standard-64 \
    --maintenance-policy=TERMINATE \
    --accelerator=type=nvidia-tesla-v100,count=8 \
    --tags=http-server,https-server \