simple deep dream

Main class: simple DeepDream

  • the @tf.function decorator traces the Python function into a TensorFlow graph the first time it is called, so later calls run the compiled graph instead of eager Python, which is faster
  • tf.TensorSpec in the input_signature pins down the shape and dtype of each argument up front, so the function is traced once instead of being retraced for every new input shape (see the sketch below)
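A minimal sketch, separate from the gist's class, illustrating why the input_signature matters; the scale function here is made up for illustration only:

import tensorflow as tf

# Without an input_signature, tf.function retraces for every new input shape.
# With a TensorSpec covering [None, None, 3], the graph is traced once and reused.
@tf.function(input_signature=(tf.TensorSpec(shape=[None, None, 3], dtype=tf.float32),))
def scale(img):
    print("Tracing")            # runs only while tracing, not on later calls
    return img * 2.0

scale(tf.zeros([4, 4, 3]))      # prints "Tracing" (first trace)
scale(tf.zeros([8, 8, 3]))      # different shape, same signature: no retrace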

__call__

  • here we compute the gradient of the loss with respect to the input image
  • this method is called gradient ascent: on every step the gradient is added to the image, which increases the activations of the chosen layers, which is exactly what we want
  • GradientTape records the operations applied to the watched image so the gradient of the loss with respect to the image can be computed afterwards; because img is a plain tensor rather than a tf.Variable, it has to be watched explicitly with tape.watch(img)
  • after we get the gradients, we normalize them by their standard deviation (the small 1e-8 term avoids division by zero)
  • img = img + gradients * step_size is the main ascent step, which moves the image in the direction that maximizes the loss
  • tf.clip_by_value keeps the image in the range [-1, 1]: any value less than -1 is set to -1 and any value greater than 1 is set to 1 (you can think of it as another form of normalization)
import tensorflow as tf


class DeepDream(tf.Module):
    def __init__(self, model):
        self.model = model

    @tf.function(input_signature=(
        tf.TensorSpec(shape=[None, None, 3], dtype=tf.float32),
        tf.TensorSpec(shape=[], dtype=tf.int32),
        tf.TensorSpec(shape=[], dtype=tf.float32),
    ))
    def __call__(self, img, steps, step_size):
        print("Tracing")
        loss = tf.constant(0.0)
        for n in tf.range(steps):
            with tf.GradientTape() as tape:
                # img is a plain tensor, not a variable, so it must be watched explicitly
                tape.watch(img)
                loss = calc_loss(img, self.model)

            # gradient of the loss with respect to the input image
            gradients = tape.gradient(loss, img)

            # normalize the gradients; the epsilon avoids division by zero
            gradients /= tf.math.reduce_std(gradients) + 1e-8

            # gradient ascent: step the image in the direction that increases the loss
            img = img + gradients * step_size
            img = tf.clip_by_value(img, -1, 1)

        return loss, img

deepdream = DeepDream(dream_model)
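
The gist assumes calc_loss and dream_model are defined earlier. A minimal sketch of what they could look like; the InceptionV3 base and the mixed3/mixed5 layer choice are assumptions for illustration, not part of the gist:

import tensorflow as tf

# Assumed feature extractor: a pretrained InceptionV3 exposing the activations
# of a couple of intermediate layers ("mixed3", "mixed5" are illustrative picks).
base_model = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
layers = [base_model.get_layer(name).output for name in ("mixed3", "mixed5")]
dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers)


def calc_loss(img, model):
    # Add a batch dimension, run the image through the model, and sum the mean
    # activation of each chosen layer; this sum is what gradient ascent maximizes.
    img_batch = tf.expand_dims(img, axis=0)
    layer_activations = model(img_batch)
    if len(layer_activations) == 1:
        layer_activations = [layer_activations]
    return tf.reduce_sum([tf.reduce_mean(act) for act in layer_activations])

With those in place, a call might look like this; the scalar arguments must be tensors because of the input_signature:

img = tf.random.uniform([224, 224, 3], minval=-1.0, maxval=1.0)
loss, dreamed = deepdream(img, tf.constant(100), tf.constant(0.01))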