@mmuratarat
Last active March 18, 2021 13:44
import numpy as np
from math import ceil


def multi_convolution2d(input, filter, strides=(1, 1), padding='SAME'):
    # 2D convolution (cross-correlation, as in tf.nn.conv2d) of one image
    # with a bank of multiple filters.
    if not len(filter.shape) == 4:
        raise ValueError("The size of filter should be (filter_height, filter_width, filter_depth, number_of_filters)")
    if not len(input.shape) == 3:
        raise ValueError("The size of the input should be (input_height, input_width, input_depth)")
    if not filter.shape[2] == input.shape[2]:
        raise ValueError("The input and the filter should have the same depth.")

    input_h, input_w = input.shape[0], input.shape[1]      # input height and width
    filter_h, filter_w = filter.shape[0], filter.shape[1]  # filter height and width
    output_d = filter.shape[3]                             # output depth (number of filters)

    if padding == 'VALID':
        output_h = int(ceil(float(input_h - filter_h + 1) / float(strides[0])))
        output_w = int(ceil(float(input_w - filter_w + 1) / float(strides[1])))
        output = np.zeros((output_h, output_w, output_d))  # convolution output
        for ch in range(output_d):        # loop over every channel of the output
            for x in range(output_w):     # loop over every pixel of the output
                for y in range(output_h):
                    # element-wise multiplication of the filter and the image patch
                    output[y, x, ch] = (filter[:, :, :, ch] *
                                        input[y * strides[0]:y * strides[0] + filter_h,
                                              x * strides[1]:x * strides[1] + filter_w, :]).sum()
    elif padding == 'SAME':
        output_h = int(ceil(float(input_h) / float(strides[0])))
        output_w = int(ceil(float(input_w) / float(strides[1])))
        if input_h % strides[0] == 0:
            pad_along_height = max(filter_h - strides[0], 0)
        else:
            pad_along_height = max(filter_h - (input_h % strides[0]), 0)
        if input_w % strides[1] == 0:
            pad_along_width = max(filter_w - strides[1], 0)
        else:
            pad_along_width = max(filter_w - (input_w % strides[1]), 0)
        pad_top = pad_along_height // 2           # amount of zero padding on the top
        pad_bottom = pad_along_height - pad_top   # amount of zero padding on the bottom
        pad_left = pad_along_width // 2           # amount of zero padding on the left
        pad_right = pad_along_width - pad_left    # amount of zero padding on the right
        output = np.zeros((output_h, output_w, output_d))  # convolution output
        # Add zero padding to the input image. Use explicit end indices:
        # slicing with pad_top:-pad_bottom breaks when pad_bottom (or pad_right) is 0.
        image_padded = np.zeros((input_h + pad_along_height, input_w + pad_along_width, input.shape[2]))
        image_padded[pad_top:pad_top + input_h, pad_left:pad_left + input_w, :] = input
        for ch in range(output_d):        # loop over every channel of the output
            for x in range(output_w):     # loop over every pixel of the output
                for y in range(output_h):
                    # element-wise multiplication of the filter and the padded image patch
                    output[y, x, ch] = (filter[..., ch] *
                                        image_padded[y * strides[0]:y * strides[0] + filter_h,
                                                     x * strides[1]:x * strides[1] + filter_w, :]).sum()
    else:
        raise ValueError("padding should be either 'VALID' or 'SAME'")
    return output
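A quick sanity check of the SAME-padding arithmetic used above (the numbers here are chosen only for illustration): for a 5-pixel side, a 3-tap filter, and stride 2, TensorFlow's formulas give an output size of ceil(5/2) = 3, which matches the number of window positions that fit on the padded input.

```python
from math import ceil

# Illustrative values: input side 5, filter side 3, stride 2.
input_h, filter_h, stride = 5, 3, 2
output_h = ceil(input_h / stride)                       # SAME output size -> 3

# Same padding computation as in multi_convolution2d above.
if input_h % stride == 0:
    pad_along = max(filter_h - stride, 0)
else:
    pad_along = max(filter_h - input_h % stride, 0)
pad_top = pad_along // 2                                # 1
pad_bottom = pad_along - pad_top                        # 1

# Number of filter positions on the padded input, sliding with the stride.
positions = (input_h + pad_along - filter_h) // stride + 1
print(output_h, positions)                              # 3 3
```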
@chojuahn commented Mar 18, 2021

Hi,
I am trying to replicate the feedforward pass of a Keras model with padding and strides, and I bumped into this page.
Where do you add the bias in the padding and non-padding cases?
I only see the dot product of the image and the filters slotted into the output, without a bias term.

@mmuratarat (Author)

This is not about modeling; it just performs 2D convolutions.
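For what it's worth, in Keras/TensorFlow the bias is not part of the convolution itself: a Conv2D layer adds one scalar bias per output channel after the sliding-window sums, regardless of padding. A minimal sketch (the array values are purely illustrative):

```python
import numpy as np

# Hypothetical convolution output for 3 filters, e.g. from multi_convolution2d.
conv_out = np.zeros((4, 4, 3))

# One bias per filter/output channel, added after the convolution.
bias = np.array([1.0, 2.0, 3.0])
out = conv_out + bias            # broadcasts over height and width
print(out[0, 0])                 # [1. 2. 3.]
```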

@chojuahn
Thanks for the reply.
I was just confused about whether TensorFlow skips the bias in the padded multi-filter case (which would be a big surprise), whereas the single-filter case in your convolution_one_filter.py does include it.
