@kukuruza
Last active March 4, 2021 01:58
TensorFlow: visualize convolutional filters (conv1) in the CIFAR-10 model
import tensorflow as tf
from math import sqrt


def put_kernels_on_grid(kernel, pad=1):
  '''Visualize conv. filters as an image (mostly for the 1st layer).
  Arranges filters into a grid, with some padding between adjacent filters.

  Args:
    kernel: tensor of shape [Y, X, NumChannels, NumKernels]
    pad:    number of black pixels around each filter (between them)
  Return:
    Tensor of shape [1, (Y+2*pad)*grid_Y, (X+2*pad)*grid_X, NumChannels].
  '''
  # Get the shape of the grid. NumKernels == grid_Y * grid_X.
  def factorization(n):
    for i in range(int(sqrt(float(n))), 0, -1):
      if n % i == 0:
        if i == 1:
          print('Who would enter a prime number of filters')
        return (i, int(n / i))

  (grid_Y, grid_X) = factorization(kernel.get_shape()[3].value)
  print('grid: %d = (%d, %d)' % (kernel.get_shape()[3].value, grid_Y, grid_X))

  # Normalize the kernel to [0, 1] for display.
  x_min = tf.reduce_min(kernel)
  x_max = tf.reduce_max(kernel)
  kernel = (kernel - x_min) / (x_max - x_min)

  # Pad X and Y with `pad` black pixels around each filter.
  x = tf.pad(kernel, tf.constant([[pad, pad], [pad, pad], [0, 0], [0, 0]]), mode='CONSTANT')

  # X and Y dimensions, w.r.t. padding.
  Y = kernel.get_shape()[0] + 2 * pad
  X = kernel.get_shape()[1] + 2 * pad

  channels = kernel.get_shape()[2]

  # Put NumKernels into the 1st dimension.
  x = tf.transpose(x, (3, 0, 1, 2))
  # Organize the grid on the Y axis.
  x = tf.reshape(x, tf.stack([grid_X, Y * grid_Y, X, channels]))

  # Switch X and Y axes.
  x = tf.transpose(x, (0, 2, 1, 3))
  # Organize the grid on the X axis.
  x = tf.reshape(x, tf.stack([1, X * grid_X, Y * grid_Y, channels]))

  # Back to normal order (not combining with the next step for clarity).
  x = tf.transpose(x, (2, 1, 3, 0))

  # To the tf.summary.image order [batch_size, height, width, channels],
  # where in this case batch_size == 1.
  x = tf.transpose(x, (3, 0, 1, 2))

  # Scaling to [0, 255] is not necessary for TensorBoard.
  return x
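
# For intuition, a quick sanity check of the grid factorization above
# (grid_Y * grid_X == NumKernels and grid_Y <= grid_X; the filter counts
# other than 64 are purely illustrative):
#   factorization(64) -> (8, 8)    # the CIFAR-10 tutorial's conv1 has 64 filters
#   factorization(96) -> (8, 12)
#   factorization(13) -> (1, 13)   # a prime count degenerates into a single row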
#
# ... and somewhere inside "def train():" after calling "inference()"
#
# Visualize conv1 kernels
with tf.variable_scope('conv1'):
  tf.get_variable_scope().reuse_variables()
  weights = tf.get_variable('weights')
  grid = put_kernels_on_grid(weights)
  tf.summary.image('conv1/kernels', grid, max_outputs=1)
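
# Separately, a self-contained smoke test (not part of the original gist; the
# weight shape, initializer, and log directory are illustrative assumptions,
# TF 1.x graph mode). It fakes conv1 weights, builds the grid, and writes a
# single image summary that TensorBoard can display.
def _smoke_test_kernel_grid(logdir='/tmp/kernel_grid'):
  with tf.Graph().as_default():
    with tf.variable_scope('conv1'):
      weights = tf.get_variable(
          'weights', shape=[5, 5, 3, 64],
          initializer=tf.truncated_normal_initializer(stddev=5e-2))
    grid = put_kernels_on_grid(weights)
    summary_op = tf.summary.image('conv1/kernels', grid, max_outputs=1)
    with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      writer = tf.summary.FileWriter(logdir, sess.graph)
      writer.add_summary(sess.run(summary_op), global_step=0)
      writer.close()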
@kukuruza
Author

kukuruza commented May 4, 2018

@headdab, I have already implemented the changes suggested by @kitovyj in this gist, and changed pack to stack to match the API of recent TF versions.
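
For anyone still on an older release, the rename is a drop-in change; a minimal illustration using the same reshape as above:

  # TF <= 0.12
  x = tf.reshape(x, tf.pack([1, X * grid_X, Y * grid_Y, channels]))
  # TF >= 1.0: tf.pack was renamed to tf.stack
  x = tf.reshape(x, tf.stack([1, X * grid_X, Y * grid_Y, channels]))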

@IzzCode

IzzCode commented Mar 3, 2020

    raise ValueError("Variable %s does not exist, or was not created with "
                     "tf.get_variable(). Did you mean to set "
                     "reuse=tf.AUTO_REUSE in VarScope?" % name)


ValueError: Variable conv1/weights does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?

I got this error. What should I do?
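
For context, this error means that no variable named conv1/weights was created with tf.get_variable() in the current graph before the reuse block runs. A minimal sketch of the setup the snippet assumes, modeled on the CIFAR-10 tutorial (the shapes and conv op here are illustrative):

  with tf.Graph().as_default():
    images = tf.placeholder(tf.float32, [None, 24, 24, 3])
    # inference() (or equivalent) must create conv1/weights via tf.get_variable first.
    with tf.variable_scope('conv1'):
      weights = tf.get_variable(
          'weights', shape=[5, 5, 3, 64],
          initializer=tf.truncated_normal_initializer(stddev=5e-2))
      conv = tf.nn.conv2d(images, weights, [1, 1, 1, 1], padding='SAME')
    # ... rest of the model ...
    # Only then can the visualization block reuse the variable:
    with tf.variable_scope('conv1', reuse=True):  # or reuse=tf.AUTO_REUSE (TF >= 1.4)
      grid = put_kernels_on_grid(tf.get_variable('weights'))
      tf.summary.image('conv1/kernels', grid, max_outputs=1)

If the model builds conv1 with tf.Variable(...) instead of tf.get_variable(...), the scope has nothing to reuse and this ValueError is raised.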
