Instructions on setting up and running neural-style on CentOS and AWS G2, and how to create slideshow videos of the results with ffmpeg

Instructions for neural-style installation and operation

26.11.2015, ttur@futurice.com

See https://github.com/jcjohnson/neural-style for information on what neural-style is

See www.spiceprogram.org/artcorn for what I've done with it

See this file for the related commands and installation procedures

Installation on AWS

The AWS G2 (g2.2xlarge) instance comes with a compatible GPU. The GPU only has 4GB of memory, so you are limited to 512px images :( It's fast though - it takes 20-30 minutes to create one image.

It's also expensive. Don't leave it running by accident, and make sure to set billing alerts. I did not do that, and my family will get beautiful neural-styled pictures as Christmas presents, if I can afford the printing.
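
Setting the alert can itself be scripted. A sketch with the AWS CLI, not verified here: billing metrics only exist in us-east-1, "Receive Billing Alerts" must first be enabled in the account preferences, and the SNS topic ARN below is a placeholder:

    aws cloudwatch put-metric-alarm \
      --region us-east-1 \
      --alarm-name neural-style-budget \
      --namespace AWS/Billing \
      --metric-name EstimatedCharges \
      --dimensions Name=Currency,Value=USD \
      --statistic Maximum \
      --period 21600 \
      --evaluation-periods 1 \
      --threshold 50 \
      --comparison-operator GreaterThanOrEqualToThreshold \
      --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts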

Setting up the instance couldn't be much easier: there's an existing Amazon Machine Image that gets you halfway there. Just commandeer one of these babies, and you're nearly in business:

    https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#LaunchInstanceWizard:ami=ami-ffba7b94

After you have it running, just log in and here we go:

    luarocks install image
    
    luarocks install loadcaffe
    
    luarocks install torch
    
    LD_LIBRARY_PATH=/home/ubuntu/cudnn-6.5-linux-x64-v2-rc2:/home/ubuntu/torch-distro/install/lib:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH
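
The export above only lasts for the current shell. To persist it across logins, a one-liner sketch (assuming the stock bash setup on the AMI; the single quotes keep $LD_LIBRARY_PATH unexpanded until login):

    echo 'export LD_LIBRARY_PATH=/home/ubuntu/cudnn-6.5-linux-x64-v2-rc2:/home/ubuntu/torch-distro/install/lib:$LD_LIBRARY_PATH' >> ~/.bashrc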
    
    git clone https://github.com/jcjohnson/neural-style
    
    cd neural-style/models
    
    chmod +x download_models.sh
    
    ./download_models.sh

DONE!
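
Before burning GPU money on a render, a quick smoke test is worthwhile. A sketch, assuming the AMI ships cutorch and cudnn (the cudnn backend needs both):

    # Confirm the driver sees the GPU
    nvidia-smi

    # Confirm the Lua side can load the CUDA stack
    th -e "require 'cutorch'; require 'cudnn'; print('GPU stack OK')"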

Installation on CentOS

Get an instance with at least 20GB of memory if you're going to be running on CPU (not GPU).

    curl -s https://raw.githubusercontent.com/torch/ezinstall/master/install-deps | bash
    
    git clone https://github.com/torch/distro.git ~/torch --recursive
    
    cd ~/torch; ./install.sh
    
    source ~/.bashrc
    
    sudo yum install protobuf-devel.x86_64
    
    luarocks install loadcaffe
    
    git clone https://github.com/jcjohnson/neural-style.git
    
    cd neural-style/models
    
    chmod +x download_models.sh
    
    ./download_models.sh

DONE!
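
A similar smoke test for the CPU-only setup; a sketch, since the CPU path only needs nn and loadcaffe:

    th -e "require 'nn'; require 'loadcaffe'; print('CPU stack OK')"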

Operation

There is one difference in operation between AWS and CentOS. Since my CentOS machine does not have a GPU compatible with neural-style, I run it on the CPU:

    th neural_style.lua -style_image style.jpg -style_weight 100 -num_iterations 1000 -content_image content.jpg -image_size 1024 -output_image result.png -save_iter 5 -gpu -1

That's the -gpu -1 in the spell.
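
A 1024px CPU run takes hours, so it is worth detaching it from the SSH session. A sketch using nohup (file names as in the example above):

    nohup th neural_style.lua -style_image style.jpg -style_weight 100 -num_iterations 1000 \
      -content_image content.jpg -image_size 1024 -output_image result.png \
      -save_iter 5 -gpu -1 > run.log 2>&1 &

    # Follow the progress
    tail -f run.log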

On AWS I use the GPU:

    th neural_style.lua -style_image style.jpg -style_weight 100 -num_iterations 1000 -content_image content.jpg -image_size 512 -output_image result.png -save_iter 5 -backend cudnn

Processing the output

With 1000 iterations and save_iter 5 we get 200 output images:

mapcorn_5.png mapcorn_10.png mapcorn_15.png ... mapcorn_990.png mapcorn_995.png mapcorn.png

If save_iter is something else, the numbering reflects that:

picacorn_6.png picacorn_12.png picacorn_18.png ... picacorn_990.png picacorn_996.png picacorn.png

So there's a sequence (with the last image missing the numbering), but feeding that to ffmpeg directly doesn't work too well: its image-sequence input expects consecutive, zero-padded numbering.

For that purpose I rename the files:

    mv mapcorn.png mapcorn_1000.png
    mv picacorn.png picacorn_1000.png
            
    # seq -w generates zero-padded sequence numbers, paste glues each old
    # name to its new number, and xargs feeds the pairs to mv. ls -v sorts
    # numerically so frames keep iteration order; glob per prefix so the sets don't mix.
    ls -v mapcorn_*png > /tmp/list ; seq -w `ls mapcorn_*png | wc -l` | paste /tmp/list - | awk -F\\t '{ print $1, "mapcorn_"$2".png"}' | xargs -n2 mv

    ls -v picacorn_*png > /tmp/list ; seq -w `ls picacorn_*png | wc -l` | paste /tmp/list - | awk -F\\t '{ print $1, "picacorn_"$2".png"}' | xargs -n2 mv

As a result we now have a sequence without gaps:

mapcorn_001.png mapcorn_002.png mapcorn_003.png ... mapcorn_198.png mapcorn_199.png mapcorn_200.png picacorn_001.png picacorn_002.png picacorn_003.png ... picacorn_198.png picacorn_199.png picacorn_200.png
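
If you do this often, the renumbering can be wrapped in a small function. A sketch (renumber is a made-up name; assumes GNU ls for the -v numeric sort):

    renumber() {
      prefix=$1
      # give the final, unnumbered image the highest number so it sorts last
      [ -f "${prefix}.png" ] && mv "${prefix}.png" "${prefix}_1000.png"
      ls -v ${prefix}_*png > /tmp/list
      seq -w `ls ${prefix}_*png | wc -l` | paste /tmp/list - \
        | awk -F\\t -v p="$prefix" '{ print $1, p"_"$2".png" }' | xargs -n2 mv
    }

    renumber mapcorn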

This is easy to feed to ffmpeg to create an mp4 video of the process:

    ffmpeg -framerate 10 -i 'mapcorn_%03d.png' -c:v libx264 -r 30 -pix_fmt yuv420p mapcorn.mp4
    ffmpeg -framerate 10 -i 'picacorn_%03d.png' -c:v libx264 -r 30 -pix_fmt yuv420p picacorn.mp4
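
One caveat: libx264 with yuv420p needs even frame dimensions, and the height of neural-style output depends on the content image's aspect ratio. If ffmpeg complains about the size, a sketch with a scale filter that rounds both dimensions down to even values:

    ffmpeg -framerate 10 -i 'mapcorn_%03d.png' -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -c:v libx264 -r 30 -pix_fmt yuv420p mapcorn.mp4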

If you use save_iter 10 instead, you don't need to worry about the filenames as your sequence will be:

rousseaucorn_10.png rousseaucorn_20.png rousseaucorn_30.png ... rousseaucorn_980.png rousseaucorn_990.png rousseaucorn.png

In this case you can simply treat the trailing 0 as a literal part of the pattern and you have a sequence in place for ffmpeg (these frame numbers are not zero-padded, so use %d rather than %03d):

    mv rousseaucorn.png rousseaucorn_1000.png
    
    ffmpeg -framerate 10 -i 'rousseaucorn_%d0.png' -c:v libx264 -r 30 -pix_fmt yuv420p rousseaucorn.mp4

I use framerate 10 in these examples; with 200 frames you get 20 seconds of video.

Most of the visible transformations happen in the first 100 frames.

If you want to change the length of the video after creating it:

    ffmpeg -i mapcorn.mp4 -filter:v "setpts=2*PTS" mapcorn_40sec.mp4

This would double the length; using 0.5*PTS would halve it, and so on.
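
In general the factor is the target length divided by the current length. For example, stretching a 20-second video to 50 seconds (the output name is just an example):

    ffmpeg -i mapcorn.mp4 -filter:v "setpts=2.5*PTS" mapcorn_50sec.mp4

These videos have no audio track, so setpts alone is enough; a video with sound would need its audio stretched separately.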

The easiest way to combine videos is to create an order.txt like so:

    file 'rousseaucorn.mp4'
    file 'mapcorn.mp4'

Then running:

    ffmpeg -f concat -i order.txt -c copy twocorn.mp4
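
With many runs, order.txt can also be generated instead of written by hand. A sketch (the output name is just an example):

    printf "file '%s'\n" *.mp4 > order.txt
    ffmpeg -f concat -i order.txt -c copy allcorn.mp4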

If you want to transcode to the WebM format:

    ffmpeg -i picacorn.mp4 -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis picacorn.webm