@drdaxxy
Created July 4, 2015 18:25
import numpy as np
import PIL.Image
from PIL import Image, ImageEnhance, ImageOps

# Seed with the first frame; `net` and `deepdream` come from the
# DeepDream notebook, `compare_images` from motion.py.
input_img = PIL.Image.open('/home/niklas/interstellar/0465.png')
output_img = input_img
for i in xrange(465, 1100):
    old_input_img = input_img
    input_img = PIL.Image.open('/home/niklas/interstellar/%04d.png' % i)
    # Blend the previous dreamed frame into the new input frame, using a
    # darkened motion mask so static areas keep more of the old dream.
    mask = ImageEnhance.Brightness(ImageOps.grayscale(
        compare_images(old_input_img, input_img, 10))).enhance(0.1)
    output_img = Image.composite(output_img, input_img, mask)
    frame = deepdream(net, np.float32(output_img), end='pool5', iter_n=10, octave_n=2)
    output_img = PIL.Image.fromarray(np.uint8(frame))
    output_img.save('/home/niklas/interstellar/out/%04d.png' % i)
This blends old post-dream frames with new pre-dream frames using the changed pixels between old and new pre-dream frames as an alpha mask. This means there's lots of blending (and stability) in low-motion areas, and little to no blending in high-motion areas.
Video: http://gfycat.com/AccomplishedNeatFlyingfish
(Video source is Interstellar IMAX Blu-ray)
compare_images is from motion.py - http://bogdanmarian.com/motion/
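For readers without motion.py at hand, here is a hedged NumPy stand-in for what compare_images appears to do in this pipeline (the function name `compare_images_stub` and its exact semantics are my assumption, inferred from the blending behavior described above: bright where consecutive frames match, dark where they differ):

```python
import numpy as np

def compare_images_stub(old, new, threshold):
    # Hypothetical stand-in for motion.py's compare_images: white where a
    # pixel is (near-)unchanged between frames, black where it moved.
    # Used as a composite mask, white regions keep the old dreamed frame.
    diff = np.abs(old.astype(np.int16) - new.astype(np.int16)).max(axis=-1)
    return np.where(diff <= threshold, 255, 0).astype(np.uint8)

old = np.zeros((2, 2, 3), dtype=np.uint8)
new = old.copy()
new[0, 0] = 200  # one pixel moves between frames
mask = compare_images_stub(old, new, 10)
```

Wrapping the array with PIL.Image.fromarray(mask) would give an 'L'-mode mask suitable for Image.composite.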
I use the VGG-F network, simply because it's much faster than GoogLeNet.
I removed jitter from my make_step(), otherwise everything is as in the original notebook.
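For context, the jitter in the original notebook's make_step shifts the image by a random offset before the gradient ascent step and shifts it back afterwards. A minimal sketch of just that mechanic, with a small array standing in for the network's input blob (removing jitter simply drops both np.roll calls):

```python
import numpy as np

jitter = 32
# Random shift applied before the gradient step in the original make_step
ox, oy = np.random.randint(-jitter, jitter + 1, 2)

img = np.arange(12.0).reshape(3, 4)          # stand-in for src.data[0]
shifted = np.roll(np.roll(img, ox, -1), oy, -2)   # apply jitter
# ... gradient ascent step would modify `shifted` here ...
restored = np.roll(np.roll(shifted, -ox, -1), -oy, -2)  # undo jitter
```

The roll/unroll pair is lossless, so with no gradient step in between `restored` equals `img` exactly.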