-- modified version of @jcjohnson's neural-style
-- by @htoyryla 13 Feb 2018
-- allows giving emphasis to the nc best channel(s) in each style layer
-- use -style_layers to select layer as usual, using a single layer is recommended
-- -nc to set how many of the best channels are used per layer
-- during target capture, runs the style image through the model
-- and selects the nc channels with the strongest activations to be given emphasis during the iterations
-- not tested with multiple style images
require 'torch'
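The channel selection described above can be pictured as a small helper that ranks the channels of a style layer's feature maps by total activation and keeps the top nc; the names features and nc below are illustrative, and torch is assumed to be loaded as in the snippet.

-- sketch: pick the nc channels with the strongest activations
-- features: C x H x W tensor captured at a style layer for the style image
local function strongestChannels(features, nc)
  local C = features:size(1)
  local strength = features:view(C, -1):sum(2):squeeze()  -- total activation per channel
  local _, idx = torch.sort(strength, 1, true)            -- sort channels, strongest first
  return idx:narrow(1, 1, nc)                             -- indices of the nc strongest channels
end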
# git clone from https://github.com/tkarras/progressive_growing_of_gans
# download the snapshot from their Google Drive
# use the following code in the same directory to generate random faces
import os
import sys
import time
import glob
import shutil
import operator
import theano
-- modified version of @jcjohnson's neural-style
-- by @htoyryla 14 Feb 2018
-- allows exploring the effect of equalizing the activations of different channels in a style layer
require 'torch'
require 'nn'
require 'image'
require 'optim'
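The equalization this gist explores can be sketched as rescaling each channel of the style layer's activations so that every channel ends up with the same mean activation; the function below is an illustration under that reading, not the gist's own code.

-- sketch: scale each channel so that all channels have the same mean activation
-- features: C x H x W feature maps captured at a style layer
local function equalizeChannels(features)
  local C = features:size(1)
  local flat = features:view(C, -1)
  local means = flat:mean(2):squeeze()   -- per-channel mean activation
  local target = means:mean()            -- common level to equalize towards
  for c = 1, C do
    if means[c] > 0 then
      flat[c]:mul(target / means[c])     -- in-place rescale of this channel
    end
  end
  return features
end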
-- this program takes in an image
-- and finds nc channels on a given layer
-- having the strongest activations
require 'torch'
require 'nn'
require 'image'
require 'loadcaffe'
function preprocess(img)
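Put together, such a probe forwards the preprocessed image through the loaded network, reads the activations at the chosen layer, and prints the strongest channels. The sketch below assumes that flow; layerIdx and nc stand in for whatever command-line options the gist actually uses.

-- sketch: report the nc strongest channels at a given layer of the loaded cnn
-- (layerIdx and nc are illustrative names, not the gist's option names)
local function probeChannels(cnn, img, layerIdx, nc)
  cnn:forward(preprocess(img))
  local fmap = cnn:get(layerIdx).output              -- C x H x W activations at that layer
  local C = fmap:size(1)
  local strength = fmap:view(C, -1):sum(2):squeeze() -- total activation per channel
  local _, idx = torch.sort(strength, 1, true)       -- strongest first
  for i = 1, nc do
    print(string.format('channel %d  activation %.2f', idx[i], strength[idx[i]]))
  end
end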
@htoyryla
htoyryla / neural-mean2.lua
Last active October 27, 2017 09:13
Neural-style with additional loss through spatial mean of feature maps
-- neural-style modified by @htoyryla 27 Oct 2017
-- to use both the gram matrix (as a statistical evaluation) and a mean of n feature maps (to retain the spatial structure of the style image)
-- to evaluate style
-- StyleLoss and GramMatrix have been copied from fast-neural-style
-- but modified to calculate a spatial mean of the feature maps
-- new param mean_weight to control the amount of mean for style
-- loss_type to select between L2 and SmoothL1 (as in fast-neural-style)
-- **** style_scale no longer works; the mean calculation requires that the style image is resized to the same size as the content image ****
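Read literally, the spatial-mean term averages the feature maps over the channel dimension, which preserves the H x W layout of the style image, and compares that map to the one computed from the generated image. A rough sketch under that reading, using plain L2 (the gist also offers SmoothL1 via loss_type) and assuming nn is loaded as in the other snippets.

-- sketch: spatial mean of feature maps plus an L2 penalty against the style target
-- input, target: C x H x W feature maps of the generated image and the style image
local function spatialMeanLoss(input, target, mean_weight)
  local inMean = input:mean(1)    -- 1 x H x W map, averaged over channels
  local tgMean = target:mean(1)
  local crit = nn.MSECriterion()
  return mean_weight * crit:forward(inMean, tgMean)
end

Because the two mean maps must have matching H x W, this also explains the note above that the style image has to be resized to the content image size.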
@htoyryla
htoyryla / neural_grad.lua
Created October 24, 2017 09:41
neural-style modified not to decrease style gradients when image size is increased
-- this is a modified version of neural_style.lua originally by @jcjohnson
-- @htoyryla 24 Oct 2017
--
-- original neural-style includes scaling of gram matrix output and gradient in the StyleLoss
-- using division by the number of elements in input
-- this is done to make the style representations from different layers (of different size) more comparable
-- but it also has the effect of decreasing gradients as the image size is increased
--
-- this version attempts to keep the gradients from style loss modules in the same range
-- (i.e. the range obtained at image size 512px)
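One way to picture the change: instead of dividing the style gradient by the current number of elements in the layer, divide by a fixed reference count corresponding to the feature map size at 512px, so the magnitude no longer shrinks as the image grows. The helper below only illustrates that scaling, it is not the gist's StyleLoss code; refElements is an assumed name.

-- sketch: normalize the style gradient by a fixed reference element count
-- dG: gradient of the style loss w.r.t. the gram matrix
-- refElements: number of elements the layer would have at image size 512px
local function scaleStyleGrad(dG, refElements)
  -- original neural-style would divide by the current number of elements in the input
  dG:div(refElements)
  return dG
end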
@htoyryla
htoyryla / neural-channels.lua
Last active May 18, 2017 22:16
Neural-style with selected channels emphasis
-- modified version of @jcjohnson's neural-style
-- by @htoyryla 14 May 2017
-- allows exploring the effect of giving emphasis to specific channel(s) in a style layer
-- use -style_layers to select layer as usual, using a single layer is recommended
-- -style_channels to specify one or more channels to be given emphasis, other channels will be attenuated
-- if a channel does not exist in a layer, it is ignored and given a warning
-- note: if multiple style layers are used, the style_channels setting affects all of them
require 'torch'
require 'nn'
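The emphasis can be sketched as a per-channel scaling mask applied to the style-layer activations before the Gram matrix is computed: the selected channels are boosted and the rest attenuated, with a warning for channel numbers that do not exist in the layer. The names boost and damp below are illustrative, not the gist's parameters.

-- sketch: boost the selected channels and attenuate the others
-- features: C x H x W style-layer activations, channels: table of channel indices
local function emphasizeChannels(features, channels, boost, damp)
  local C = features:size(1)
  local scale = torch.Tensor(C):fill(damp)
  for _, c in ipairs(channels) do
    if c >= 1 and c <= C then
      scale[c] = boost
    else
      print('warning: channel ' .. c .. ' does not exist in this layer, ignoring it')
    end
  end
  return features:cmul(scale:view(C, 1, 1):expandAs(features))
end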
@htoyryla
htoyryla / nsrg-replace.lua
Created May 13, 2017 10:39
Varying style neural-style transfer by modifying gram matrix (v2 fantasy)
-- neural-style by jcjohnson modified by @htoyryla
-- 13 May 2017
-- generate style variants by modifying style target Gram matrix with randomized eigenvalues
-- see function randomizeEigenvalues() for details
require 'torch'
require 'nn'
require 'image'
require 'optim'
@htoyryla
htoyryla / nsrg.lua
Created May 13, 2017 10:35
Varying style neural-style transfer by modifying gram matrix (v1)
-- neural-style by jcjohnson modified by @htoyryla
-- 13 May 2017
-- generate style variants by modifying style target Gram matrix with randomized eigenvalues
-- see function randomizeEigenvalues() for details
-- this version produces slight variants of a given style
-- by multiplying the gram matrix eigenvalues by random multipliers between 0.1 and 1.9
require 'torch'
require 'nn'
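Concretely, the eigenvalue trick used by these two gists can be sketched with torch.symeig: decompose the symmetric style Gram matrix, multiply each eigenvalue by a random factor in the 0.1 to 1.9 range mentioned above, and reassemble the matrix. This is only an illustration of the idea; the gists' own version lives in their randomizeEigenvalues() function.

-- sketch: perturb the style target gram matrix by randomizing its eigenvalues
-- G: C x C symmetric gram matrix captured from the style image
local function randomizeEigenvaluesSketch(G)
  local e, V = torch.symeig(G, 'V')                        -- eigenvalues e, eigenvectors V
  local factors = torch.rand(e:size(1)):mul(1.8):add(0.1)  -- uniform multipliers in [0.1, 1.9]
  local D = torch.diag(e:cmul(factors))
  return V * D * V:t()                                     -- rebuilt gram matrix
end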
@htoyryla
htoyryla / neural-delta2.lua
Created September 13, 2016 19:41
Neural-style modified according to the delta gram matrix method as described in http://arxiv.org/abs/1606.01286v1
--
-- Neural-style modified by Hannu Töyrylä 13 Sep 2016
-- according to the delta gram matrix method
-- as described in http://arxiv.org/abs/1606.01286v1
--
-- Install and use just like http://github.com/jcjohnson/neural-style/
-- except for the additional parameter delta (use values 0 to 16; too large a value can result in an error)
--
require 'torch'
require 'nn'
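A minimal sketch of how a delta-shifted Gram matrix could look, assuming (in line with the activation-shift idea of the cited paper) that delta is simply added to the activations before the Gram product; this is an illustration, not the gist's exact code.

-- sketch: gram matrix computed from delta-shifted activations
-- features: C x H x W activations from a style layer, delta: scalar shift
local function deltaGram(features, delta)
  local C = features:size(1)
  local F = features:view(C, -1):clone():add(delta)  -- shift every activation by delta
  return torch.mm(F, F:t())                          -- C x C gram matrix
end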