@realazthat
Last active April 13, 2016 01:55
Unpermutator.md
"""
Scramble/permute a frame of a video, requires numpy and opencv.
Licensed under CC0 (https://creativecommons.org/publicdomain/zero/1.0/)
"""
import argparse
import cv2
import numpy as np
def main():
parser = argparse.ArgumentParser(description='Extract and permute a frame.')
parser.add_argument('invideo', type=argparse.FileType('rb'),
help='A video to add to the MI frame')
parser.add_argument('--out', dest='outfile', type=argparse.FileType('wb'),
help='path to output frame')
parser.add_argument('--frame', type=int, default=0,
help='Frame number to extract (defaults to 0)')
parser.add_argument('--permute', action='store_true', default=False,
help='Should the output frame be permuted')
parser.add_argument('--gray', action='store_true', default=False,
help='Should the output frame be gray scale')
parser.add_argument('--binary', action='store_true', default=False,
help='Should the output frame be binary')
args = parser.parse_args()
cap = cv2.VideoCapture(args.invideo.name)
expected_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
width0 = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height0 = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
n0 = width0*height0
ret = True
i = 0
while(ret and cap.isOpened()):
ret, frame0 = cap.read()
if not ret:
break
if i < args.frame:
i += 1
continue
frame = frame0
depth = 3
if args.gray or args.binary:
frame = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
depth = 1
print (frame.dtype)
print (frame)
if args.binary:
frame >>= 7
frame *= 255
print (frame.dtype)
print (frame)
expected_size = n0*depth
expected_shape = (height0,width0,depth)
if depth == 1:
expected_shape = (height0, width0)
assert frame.size == expected_size
assert frame.shape == expected_shape, (frame.shape, expected_shape, depth)
permuted_frame = np.copy(frame)
permuted_frame = permuted_frame.reshape((height0*width0,depth))
print ('permuted_frame.shape:',permuted_frame.shape)
np.random.shuffle(permuted_frame)
permuted_frame = permuted_frame.reshape((height0,width0, depth))
print (permuted_frame.shape)
outframe = frame
if args.permute:
outframe = permuted_frame
if args.outfile is not None:
cv2.imwrite(args.outfile.name, img=outframe)
return
if __name__ == '__main__':
main()

# Question

## Reconstructing a screen of permuted pixels

### Summary

Given a video with the pixel locations randomly permuted (once, for the entire video), can we (efficiently) reconstruct the original picture?

### Details

  • You are given an LCD screen with $n = w \times h$ pixels that has been altered so that the pixel locations are permuted, once, in some random manner.
  • Watching a video on this LCD screen would obviously result in quite a garbled picture.
  • The permutation is described by some unknown function, which we shall call $\text{permute}$, with $\text{permute}\left(x_0,y_0\right) = (x,y)$.
  • Each pixel on the screen has a permuted location $(x,y)$ and some unknown original location $(x_0,y_0)$ (such that $\text{permute}\left(x_0,y_0\right)=(x,y)$).
  • You (and your computer) get to watch a particular video: an average film of average length, with $m$ frames and the same resolution as the LCD screen. For simplicity, assume the video is grayscale (i.e., each pixel is a single intensity in the range $[0,256)$).
  • As a frame of reference, you are also told which on-screen pixel corresponds to the origin of the original, non-permuted screen (i.e. you are given the values $x$ and $y$ such that $\text{permute}\left(0,0\right)=(x,y)$).
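For concreteness, the setup above can be simulated in a few lines of numpy (a sketch only; the dimensions, frame contents, and variable names are placeholders, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)
h, w, m = 4, 6, 10           # small screen, m frames (placeholder sizes)
n = h * w

# One fixed random permutation of the n pixel locations, shared by every frame:
# permuted location j displays the original pixel perm[j].
perm = rng.permutation(n)

frames = rng.integers(0, 256, size=(m, n), dtype=np.uint8)  # grayscale in [0, 256)
permuted = frames[:, perm]   # what the altered screen shows for each frame

# The frame of reference: the on-screen location j where original pixel 0
# (i.e. original location (0, 0), flattened) ended up.
anchor = int(np.where(perm == 0)[0][0])

# Knowing the inverse permutation recovers the video exactly.
inv = np.argsort(perm)
assert np.array_equal(permuted[:, inv], frames)
```

The question is then: given only `permuted` and `anchor`, how efficiently can `inv` be recovered?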

### Goal

  • An algorithm to (efficiently) compute the mapping of the pixels back to their original locations.
  • Can this be done in $o\left(n^2\right)$ space (i.e. in better than $\mathcal O\left(n^2\right)$ space)?
  • Can this be done in $o\left(n^2 f(m)\right)$ time?
    • I'm not sure how to phrase this precisely, but I am mostly interested in reducing the $n^2$ factor.
  • Is this a known problem, or similar to any known problems?
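One naive baseline, matching the bounds asked about, is to treat each pixel as a length-$m$ time series and compute all pairwise correlations: $\mathcal O\left(n^2\right)$ space and $\mathcal O\left(n^2 m\right)$ time. In natural video, spatially adjacent pixels tend to have strongly correlated time series, which is the signal such a reconstruction would exploit. A sketch on synthetic smooth frames (the sinusoidal signal and all dimensions are stand-ins, not a claim about real footage):

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, m = 8, 8, 500
n = h * w

# Synthetic "natural" video: a smooth moving wave plus noise, so that
# spatially adjacent pixels have strongly correlated time series.
ys, xs = np.mgrid[0:h, 0:w]
t = np.arange(m)[:, None, None]
frames = np.sin(0.3 * t + 0.5 * xs + 0.5 * ys) + 0.1 * rng.standard_normal((m, h, w))

series = frames.reshape(m, n)                  # one length-m series per pixel
series = (series - series.mean(0)) / series.std(0)
corr = series.T @ series / m                   # n x n correlation matrix: O(n^2) space

# True horizontal neighbors should correlate more than a typical random pair.
neighbor_corr = np.mean([corr[y * w + x, y * w + x + 1]
                         for y in range(h) for x in range(w - 1)])
random_corr = corr[~np.eye(n, dtype=bool)].mean()
print(neighbor_corr, random_corr)
```

Even granting that signal, turning the correlation matrix into an actual placement (growing outward from the known anchor pixel, jigsaw-style) is the part I don't know how to do efficiently, hence the question.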

P.S. What about the same question, but with binary images for each frame instead of grayscale?
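For the binary variant, correlation can be replaced by the fraction of frames on which two pixels agree. A minimal sketch (the 90% agreement rate is an arbitrary stand-in for how a true spatial neighbor might behave, versus an unrelated pixel):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 400

# Binary time series per pixel; b is a noisy copy of a (a stand-in for a
# true spatial neighbor), c is independent (a stand-in for a random pixel).
a = rng.integers(0, 2, size=m)
b = np.where(rng.random(m) < 0.9, a, 1 - a)   # agrees with a ~90% of the time
c = rng.integers(0, 2, size=m)

def agree(u, v):
    # Per-pair statistic: fraction of frames on which the two pixels match.
    return float(np.mean(u == v))

print(agree(a, b), agree(a, c))
```

Each pairwise statistic costs $\mathcal O(m)$, so the naive all-pairs version is again $\mathcal O\left(n^2 m\right)$ time, with fewer bits of evidence per frame than the grayscale case.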

### Appendix

 

Figure 1 Example permuted "LCD screen", side view

Figure 2 Example permuted "LCD screen", front view

 

Figure 3 Example video frame (color)

Figure 4 Example permuted video frame (color)

 

Figure 5 Example video frame (grayscale)

Figure 6 Example permuted video frame (grayscale)

 

Figure 7 Example video frame (binary, using most-significant-bitplane)

Figure 8 Example permuted video frame (binary)

### Related source code is on gist.github.com

### Credits

tags: reference-request, information-theory, algorithms, data-structures, probability
