
# jackdoerner/riesz_pyramid.py Last active Oct 29, 2019

Riesz Pyramid Creation and Reconstruction in Python
 """ riesz_pyramid.py Conversion between Riesz and Laplacian image pyramids Based on the data structures and methodoligies described in: Riesz Pyramids for Fast Phase-Based Video Magnification Neal Wadhwa, Michael Rubinstein, Fredo Durand and William T. Freeman Computational Photography (ICCP), 2014 IEEE International Conference on Copyright (c) 2016 Jack Doerner Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
""" import numpy, math import scipy, scipy.signal #riesz_band_filter = numpy.asarray([[-0.5, 0, 0.5]]) #riesz_band_filter = numpy.asarray([[-0.2,-0.48, 0, 0.48,0.2]]) riesz_band_filter = numpy.asarray([[-0.12,0,0.12],[-0.34, 0, 0.34],[-0.12,0,0.12]]) def laplacian_to_riesz(pyr): newpyr = {'I':pyr[:-1], 'R1':[], 'R2':[]} for ii in range(len(pyr) - 1): newpyr['R1'].append( scipy.signal.convolve2d(pyr[ii], riesz_band_filter, mode='same', boundary='symm') ) newpyr['R2'].append( scipy.signal.convolve2d(pyr[ii], riesz_band_filter.T, mode='same', boundary='symm') ) newpyr['base'] = pyr[-1] return newpyr def riesz_to_spherical(pyr): newpyr = {'A':[],'theta':[],'phi':[],'Q':[],'base':pyr['base']} for ii in range(len(pyr['I']) ): I = pyr['I'][ii] R1 = pyr['R1'][ii] R2 = pyr['R2'][ii] A = numpy.sqrt(I*I + R1*R1 + R2*R2) theta = numpy.arctan2(R2,R1) Q = R1 * numpy.cos(theta) + R2 * numpy.sin(theta) phi = numpy.arctan2(Q,I) newpyr['A'].append( A ) newpyr['theta'].append( theta ) newpyr['phi'].append( phi ) newpyr['Q'].append( Q ) return newpyr def riesz_spherical_to_laplacian(pyr): newpyr = [] for ii in range(len(pyr['A'])): newpyr.append( pyr['A'][ii] * numpy.cos( pyr['phi'][ii] ) ) newpyr.append(pyr['base']) return newpyr
**rp_boundary.py** (filename inferred from the `from rp_boundary import *` import in rp_laplacian_like.py)

```python
import numpy

def symmetrical_boundary(img):
    # Manually set up a symmetric boundary condition so we can use
    # fftconvolve but avoid edge effects: pad to twice the size with
    # edge-inclusive reflection.
    (h, w) = img.shape
    imgsymm = numpy.empty((h * 2, w * 2))
    imgsymm[h//2:-(h+1)//2, w//2:-(w+1)//2] = img.copy()
    # corners
    imgsymm[0:h//2, 0:w//2] = img[0:h//2, 0:w//2][::-1, ::-1].copy()
    imgsymm[-(h+1)//2:, -(w+1)//2:] = img[-(h+1)//2:, -(w+1)//2:][::-1, ::-1].copy()
    imgsymm[0:h//2, -(w+1)//2:] = img[0:h//2, -(w+1)//2:][::-1, ::-1].copy()
    imgsymm[-(h+1)//2:, 0:w//2] = img[-(h+1)//2:, 0:w//2][::-1, ::-1].copy()
    # edges
    imgsymm[h//2:-(h+1)//2, 0:w//2] = img[:, 0:w//2][:, ::-1].copy()
    imgsymm[h//2:-(h+1)//2, -(w+1)//2:] = img[:, -(w+1)//2:][:, ::-1].copy()
    imgsymm[0:h//2, w//2:-(w+1)//2] = img[0:h//2, :][::-1, :].copy()
    imgsymm[-(h+1)//2:, w//2:-(w+1)//2] = img[-(h+1)//2:, :][::-1, :].copy()
    return imgsymm
```

(The original used Python 2's `/`; with integer operands the `//` above is equivalent and also works on Python 3.)
 """ rp_laplacian_like.py Conversion between image and laplacian-like pyramids Based on the data structures and methodoligies described in: Riesz Pyramids for Fast Phase-Based Video Magnification Neal Wadhwa, Michael Rubinstein, Fredo Durand and William T. Freeman Computational Photography (ICCP), 2014 IEEE International Conference on Copyright (c) 2016 Jack Doerner Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
""" import numpy, cv2, scipy.signal from rp_boundary import * lowpass = numpy.asarray([ [-0.0001, -0.0007, -0.0023, -0.0046, -0.0057, -0.0046, -0.0023, -0.0007, -0.0001], [-0.0007, -0.0030, -0.0047, -0.0025, -0.0003, -0.0025, -0.0047, -0.0030, -0.0007], [-0.0023, -0.0047, 0.0054, 0.0272, 0.0387, 0.0272, 0.0054, -0.0047, -0.0023], [-0.0046, -0.0025, 0.0272, 0.0706, 0.0910, 0.0706, 0.0272, -0.0025, -0.0046], [-0.0057, -0.0003, 0.0387, 0.0910, 0.1138, 0.0910, 0.0387, -0.0003, -0.0057], [-0.0046, -0.0025, 0.0272, 0.0706, 0.0910, 0.0706, 0.0272, -0.0025, -0.0046], [-0.0023, -0.0047, 0.0054, 0.0272, 0.0387, 0.0272, 0.0054, -0.0047, -0.0023], [-0.0007, -0.0030, -0.0047, -0.0025, -0.0003, -0.0025, -0.0047, -0.0030, -0.0007], [-0.0001, -0.0007, -0.0023, -0.0046, -0.0057, -0.0046, -0.0023, -0.0007, -0.0001] ]) highpass = numpy.asarray([ [0.0000, 0.0003, 0.0011, 0.0022, 0.0027, 0.0022, 0.0011, 0.0003, 0.0000], [0.0003, 0.0020, 0.0059, 0.0103, 0.0123, 0.0103, 0.0059, 0.0020, 0.0003], [0.0011, 0.0059, 0.0151, 0.0249, 0.0292, 0.0249, 0.0151, 0.0059, 0.0011], [0.0022, 0.0103, 0.0249, 0.0402, 0.0469, 0.0402, 0.0249, 0.0103, 0.0022], [0.0027, 0.0123, 0.0292, 0.0469, -0.9455, 0.0469, 0.0292, 0.0123, 0.0027], [0.0022, 0.0103, 0.0249, 0.0402, 0.0469, 0.0402, 0.0249, 0.0103, 0.0022], [0.0011, 0.0059, 0.0151, 0.0249, 0.0292, 0.0249, 0.0151, 0.0059, 0.0011], [0.0003, 0.0020, 0.0059, 0.0103, 0.0123, 0.0103, 0.0059, 0.0020, 0.0003], [0.0000, 0.0003, 0.0011, 0.0022, 0.0027, 0.0022, 0.0011, 0.0003, 0.0000] ]) def getsize(img): h, w = img.shape[:2] return w, h def build_laplacian(img, minsize=2, convolutionThreshold=500, dtype=numpy.float64): img = dtype(img) levels = [] while (min(img.shape) > minsize): if (img.size < convolutionThreshold): convolutionFunction = scipy.signal.convolve2d else: convolutionFunction = scipy.signal.fftconvolve w, h = getsize(img) symmimg = symmetrical_boundary(img) hp_img = convolutionFunction(symmimg, highpass, mode='same')[h/2:-(h+1)/2,w/2:-(w+1)/2] lp_img = 
convolutionFunction(symmimg, lowpass, mode='same')[h/2:-(h+1)/2,w/2:-(w+1)/2] levels.append(hp_img) img = cv2.pyrDown(lp_img) levels.append(img) return levels def collapse_laplacian(levels, convolutionThreshold=500): img = levels[-1] for ii in range(len(levels)-2,-1,-1): lev_img = levels[ii] img = cv2.pyrUp(img, dstsize=getsize(lev_img)) if (img.size < convolutionThreshold): convolutionFunction = scipy.signal.convolve2d else: convolutionFunction = scipy.signal.fftconvolve w, h = getsize(img) symmimg = symmetrical_boundary(img) symmlev = symmetrical_boundary(lev_img) img = convolutionFunction(symmimg, lowpass, mode='same')[h/2:-(h+1)/2,w/2:-(w+1)/2] img += convolutionFunction(symmlev, highpass, mode='same')[h/2:-(h+1)/2,w/2:-(w+1)/2] return img
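A quick sanity check on the transcribed filter taps: the pair is designed so the lowpass passes DC while the highpass rejects it, so the coefficient sums should be approximately 1 and 0 respectively (only approximately, since the published taps are truncated to four decimals). The standalone sketch below restates the 9x9 kernels from their upper-left 5x5 quadrant, using their symmetry:

```python
import numpy

def mirror(q):
    # Build a 9x9 symmetric kernel from its upper-left 5x5 quadrant.
    top = numpy.hstack([q, q[:, -2::-1]])
    return numpy.vstack([top, top[-2::-1, :]])

lowpass = mirror(numpy.asarray([
    [-0.0001, -0.0007, -0.0023, -0.0046, -0.0057],
    [-0.0007, -0.0030, -0.0047, -0.0025, -0.0003],
    [-0.0023, -0.0047,  0.0054,  0.0272,  0.0387],
    [-0.0046, -0.0025,  0.0272,  0.0706,  0.0910],
    [-0.0057, -0.0003,  0.0387,  0.0910,  0.1138],
]))

highpass = mirror(numpy.asarray([
    [0.0000, 0.0003, 0.0011, 0.0022, 0.0027],
    [0.0003, 0.0020, 0.0059, 0.0103, 0.0123],
    [0.0011, 0.0059, 0.0151, 0.0249, 0.0292],
    [0.0022, 0.0103, 0.0249, 0.0402, 0.0469],
    [0.0027, 0.0123, 0.0292, 0.0469, -0.9455],
]))

assert lowpass.shape == (9, 9) and highpass.shape == (9, 9)
assert abs(lowpass.sum() - 1.0) < 0.01   # lowpass DC gain ~ 1
assert abs(highpass.sum()) < 0.01        # highpass DC gain ~ 0
```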

### t-fukiage commented Nov 18, 2018

Thank you for uploading this code; it has been extremely helpful to me. However, one thing that seems strange to me is the use of `cv2.pyrDown` and `cv2.pyrUp` in the `build_laplacian` and `collapse_laplacian` functions. As far as I know, `cv2.pyrDown` and `cv2.pyrUp` include a Gaussian kernel convolution, which potentially breaks the original design of the lowpass/highpass kernels by Wadhwa et al.

### tschnz commented Oct 27, 2019

 "cv2.pyrDown" and "cv2.pyrUp" include Gaussian kernel convolution, which potentially breaks the original design of the lowpass/ highpass kernels by Wadhwa et al. That's correct. Instead of using pyrUp/pyrDown, one should use subsampling without interpolation, as well as upsampling with zero induced even cols/rows. To compensate for the lost energy, the lowpass is multiplied with 2. Not tested but for rp_laplacian_like.py one should replace: Row 76: `lp_img = convolutionFunction(symmimg, 2.0*lowpass, mode='same')[h/2:-(h+1)/2,w/2:-(w+1)/2]` Row 79: `img = lp_img[::2, ::2]` Row 89: `img = cv2.resize(img, dstsize=getsize(lev_img), interpolation=cv2.INTER_AREA)` `img[::2, ::2] = 0.0` Row 100: `img = convolutionFunction(symmimg, 2.0*lowpass, mode='same')[h/2:-(h+1)/2,w/2:-(w+1)/2]` That should do the trick.

### jackdoerner commented Oct 27, 2019

@t-fukiage: Sorry, I didn't initially notice your comment! I'm glad you've found this all helpful. Your observation likely explains why I originally had problems with this code.

@tschnz: Thank you for posting these corrections. I wrote this code for a film I was working on a few years ago, not expecting it ever to be used for anything else, and I was never sure I had gotten everything right mathematically. By stunning coincidence, I shot another film a few months ago for which I once again need these techniques, so you've been a great help to me as well.

As a side note, I use Riesz pyramids for motion interpolation (i.e., slow motion) rather than motion magnification as described in the original paper. I find this works dramatically better than any commercial solution for certain kinds of motion (in particular, fire and liquids). I've been meaning to write it up for years, but I still haven't gotten around to it.

### tschnz commented Oct 29, 2019

@jackdoerner Thank you for writing that up. I initially didn't understand the technique from the paper alone, so I worked through the supplemental Matlab code, a C++ application (which also uses `pyrDown`/`pyrUp`), and this Gist; all in all, I understand it now. Glad I could help. Out of curiosity: how did you interpolate the frames? Are there any papers or sources describing that technique? It sounds interesting!