Very fast implementation of a smudge tool in Pythonista, using an IOSurfaceWrapper by @JonB (copied into this file). Apple Pencil pressure sensitivity can be enabled by setting applePencil=True
import ui
import math
import numpy as np
from PIL import Image
import io
import scene
import time
from objc_util import *
from ctypes import *
from contextlib import contextmanager
'''
This is a Pythonista implementation of a smudge tool.
How it works:
The actual computation of the image happens at the very end of the code, in the infinite while loop. To make this computation fast, I used numpy, whose vectorized operations can add and multiply big arrays elementwise very quickly.
The smudge tool can be seen as an endlessly iterated operation on the image that mixes one region of the picture with another. More precisely, while the smudge tool is stroking the image, at every instant the circular region it covered a moment ago is mixed into the region it covers now.
A slightly more complex, but much more beautiful, smudge tool mixes the current region with something that depends on the whole stroke rather than on just the previous region (think of a piece of tissue you rub on a drawing: the tissue does not only push the colors around, it also absorbs them, gets dirty, and "remembers" them). This is implemented with another array, called the "stamp", also at the end of the code.
The only data the algorithm in the while loop needs is the stroke itself. Strokes are made by the user on the screen, so I used a ui.View to receive the touch_began, touch_moved, and touch_ended events.
That is really all that is needed to compute the smudge result.
Then I needed to display the result image on the screen so that the user can see the image change as their finger smudges it.
The challenge here was to display the image in real time.
Thanks to @JonB on the Pythonista forum, I was able to display a numpy array in real time (so no conversion to a ui.Image was needed) by using his IOSurfaceWrapper code.
To display a picture correctly for the eye, an array of uint8 values is enough.
However, to compute the smudge correctly, we need a numpy array of floats. The reason is that if we compute the smudge using 8-bit integers, some pixels will never smudge. Take the following example. Say you have a pixel of value 254 next to a pixel of value 255, and picture our circular brush cursor passing over them, moving from the 255 pixel towards the 254 pixel. Say they enter the area of the cursor at time t and leave it at time t+N*dt.
- At the beginning, from t to t+dt, the 255 pixel is smudged over the 254 pixel, trying to make it become, say, 254.2; but since the value is stored as an integer, it is rounded back and stays equal to 254.
- From t+dt to t+2*dt, the cursor is still smudging 255 over 254, and the same thing happens: 254 wants to become 254.2 but is rounded back to 254.
- etc.
In the end, when the two pixels leave the area of the brush cursor, the 254 pixel has not been affected by the smudge brush at all. It is still 254. And if we smudge on top of them again, the same thing happens.
This results in persistent, ugly spots on the canvas.
Now assume that, on the contrary, we use an array of floats to compute the smudge:
- From t to t+dt, the pixel 254.0 becomes, say, 254.2.
- From t+dt to t+2*dt, as the brush is still smudging on top of it, it becomes, say, 254.4.
- Then maybe 254.55, which, once rounded for display, gives 255.
So the smudge really does affect it correctly.
So we need two numpy arrays in the code:
- one "computation" array of floats storing the precise pixel values, updated very (very) often, to compute a continuous, regular, hole-free smudge. This code implements a black-and-white smudge, so we only need one float per pixel.
- one "display" array of uint8 living in the IOSurfaceWrapper object, used only for display. It has 4 uint8 per pixel (RGBA; we have no choice here) but is updated far less often than the previous array, since the eye only notices around 40 frames per second; each update simply copies the recently changed region of the float array into the uint8 array (with an implicit float-to-integer conversion and rounding).
'''
'''THIS PART IS THE @JONB CODE FOR THE IOSURFACEWRAPPER ALLOWING TO DISPLAY A NUMPY ARRAY IN REAL TIME WITHOUT CONVERTING IT TO AN UI.IMAGE:'''
''' define ctypes signatures'''
IOSurfaceCreate=c.IOSurfaceCreate
IOSurfaceCreate.argtypes=[c_void_p]
IOSurfaceCreate.restype=c_void_p
IOSurfaceGetBaseAddress=c.IOSurfaceGetBaseAddress
IOSurfaceGetBaseAddress.argtypes=[c_void_p]
IOSurfaceGetBaseAddress.restype=c_void_p
IOSurfaceGetBytesPerRow=c.IOSurfaceGetBytesPerRow
IOSurfaceGetBytesPerRow.argtypes=[c_void_p]
IOSurfaceGetBytesPerRow.restype=c_size_t
IOSurfaceGetPlaneCount=c.IOSurfaceGetPlaneCount
IOSurfaceGetPlaneCount.argtypes=[c_void_p]
IOSurfaceGetPlaneCount.restype=c_size_t
IOSurfaceLock=c.IOSurfaceLock
IOSurfaceLock.argtypes=[c_void_p, c_uint32, POINTER(c_uint32) ]
IOSurfaceLock.restype=c_int
IOSurfaceUnlock=c.IOSurfaceUnlock
IOSurfaceUnlock.argtypes=[c_void_p, c_uint32, POINTER(c_uint32) ]
IOSurfaceUnlock.restype=c_int
IOSurfaceGetPixelFormat=c.IOSurfaceGetPixelFormat
IOSurfaceGetPixelFormat.argtypes=[c_void_p]
IOSurfaceGetPixelFormat.restype=c_int32
kCVPixelFormat_32RGBA=int.from_bytes(b'RGBA', byteorder='big')
class IOSurfaceWrapper(object):
'''Wraps an IOSurface, exposing a numpy array and view.
.array = numpy array (h x w x 4 channels)
.view = ui.View with its display layer bound to .array contents
.Lock() context manager for updating the array. Use .Lock(True) to update, and .Lock(False) to delay the redraw, with .redraw() to manually force a render.
The s.Lock context manager must wrap all updates to the array, i.e.:
# redraw when context manager exits:
with s.Lock(redraw=True):
s.array[...] #manipulate data
# delayed redraw:
with s.Lock(redraw=False):
s.array[...] #manipulate data
with s.Lock(redraw=False):
s.array[...] #manipulate data
s.redraw()
'''
def __init__(self, width=1024, height=768):
bpp=4 #bytes per pixel, one per color plus alpha
properties=ns(
{ObjCInstance(c_void_p.in_dll(c,'kIOSurfaceWidth')):width,
ObjCInstance(c_void_p.in_dll(c,'kIOSurfaceHeight')):height,
ObjCInstance(c_void_p.in_dll(c,'kIOSurfaceBytesPerElement')):bpp,
ObjCInstance(c_void_p.in_dll(c, 'kIOSurfacePixelFormat')) : kCVPixelFormat_32RGBA
})
self.height=height
self.width=width
self.surf=IOSurfaceCreate(ns(properties))
self.planecount = IOSurfaceGetPlaneCount(self.surf)
self.base = IOSurfaceGetBaseAddress(self.surf)
self.stride = IOSurfaceGetBytesPerRow(self.surf)
stridewidth=self.stride//bpp
data=cast(self.base, POINTER(c_uint8*bpp*(stridewidth)*(self.height)) ).contents
a=np.ctypeslib.as_array(data)
#handle the 'extra' columns.
self.array=a[0:self.height,0:self.width,:]
self.array[:,:,:]=255 #set to white, opaque
self.setupview()
@contextmanager
def Lock(self, redraw = True):
try:
IOSurfaceLock(self.surf, 0, None)
self.base = IOSurfaceGetBaseAddress(self.surf);
yield
finally:
IOSurfaceUnlock(self.surf, 0, None)
if redraw:
self.redraw()
def setupview(self):
self.view=ui.View(frame=(0,0,self.width,self.height))
self._layer=self.view.objc_instance.layer()
self._layer.contentsOpaque=0 #1 may be faster, if alpha channel not needed
self._layer.contentsFlipped=0 #set to 1 if needed
#bind IOSurface to the layer
self._layer.setContents_(c_void_p(self.surf))
@on_main_thread
def redraw(self):
self._layer.setContentsChanged()
def __del__(self):
self._layer.contents=None
del self.array
from objc_util import c, c_void_p
CFRelease=c.CFRelease
CFRelease.argtypes=[c_void_p]
CFRelease.restype=None
CFRelease(self.surf)
'''NOW WE CAN BEGIN THE SMUDGE PROGRAM:'''
'''For debugging. By setting debug=True, you can see the red cursor continuously following the blue one, to see how the smudge effect really follows the finger'''
debug=False
'''
if you own an Apple Pencil, the smudge will be sensitive to the force by setting applePencil=True
'''
applePencil=False
'''
This function is now only used for the canvas initialisation. It converts a PIL image to a ui.Image that can be displayed. It is only needed because converting the numpy image array to a ui.Image goes through an intermediate PIL image
'''
def pil2ui(imgIn):
with io.BytesIO() as bIO:
imgIn.save(bIO, 'PNG')
imgOut = ui.Image.from_data(bIO.getvalue())
del bIO
return imgOut
'''
The cursorView displays the cursor but also triggers the updates of the display array on every touch_moved event.
'''
class CursorView (ui.View):
#The CursorView will be instanced by the GlobalView which will pass the frame as a parameter.
def __init__(self, frame):
self.frame = frame
self.flex = 'WH'
#Access the dimensions of this view
width=self.frame[2]
height=self.frame[3]
#Now we define the numpy image array of float that will be used for computing the smudge:
self.imageArray=np.ones((height,width,1), dtype=np.float32)
#We draw a black disk at the center to have something to smudge!!!
for i in range(int(width)):
for j in range(int(height)):
self.imageArray[j,i]=int(255*(1.0-(math.sqrt((i-width/2)*(i-width/2)+(j-height/2)*(j-height/2))<0.1*height/2)))
#Here we define the object containing the display image array.
self.display=IOSurfaceWrapper(int(width),int(height))
#And we initialize the display image array by setting it equal to our initial computation array (implicit float to int8 conversion here):
with self.display.Lock():
#Initialize the display array:
self.display.array[:,:,0:3]=self.imageArray
#This parameter represents how much the region to update should be bigger than the cursor. But the code is so fast that 1 is actually good.
self.dirtyRegionFactor=1
#Initialize the cursor at the center of the canvas:
self.cursor=scene.Vector2(self.frame[2]/2,self.frame[3]/2)
#Parameters:
#Force of the smudge, fixed if no Apple Pencil
self.force=0.5
#Radius of the smudge brush
self.cursorRadius=int(min(width,height)/16)
#isSmudging will be true only after the cursor/brush has started moving:
self.isSmudging=False
#Set debug=True to see this guy. Used to have the smudge apply on a very refined and continuous path rather than the blocky and angular touch_moved path one might have because of a low touch_moved rate (for this reason, the continuous follower is updated in the while(true) loop at the end of the code) :
self.cursorContinuousFollower=self.cursor
def touch_began(self, touch):
#The smudging really begins after the cursor has moved (otherwise, the last position of the cursor would be stamped on the new one)
self.isSmudging=False
#The circle of the cursor has to go to the finger even if it doesn't move
self.cursor = touch.location
#This is the only place the continuous follower is affected outside of the while(true) loop at the end of the code. Because we don't want the cursor to sweep from its last location to the new one here. We want an actual jump. So we need to force this discontinuous behavior.
self.cursorContinuousFollower = touch.location
self.set_needs_display()
def touch_moved(self, touch):
#Getting force sensitivity with the Apple Pencil:
if applePencil:
ui_touch = ObjCInstance(touch)
self.force = ui_touch.force() /2.0
#Updating the cursor
self.cursor = touch.location
#Updating the display array by copying the modified region of the computation image array
with self.display.Lock():
#Center of the region:
x,y = self.cursorContinuousFollower
#Half-side of the region, as integers so they can be used as array indices:
r=int(self.cursorRadius*self.dirtyRegionFactor)
x,y = int(x), int(y)
#update:
self.display.array[y-r:y+r,x-r:x+r,0:3]=self.imageArray[y-r:y+r,x-r:x+r]
#display the cursor (a blue circle):
self.set_needs_display()
#Makes sure that we wait until the cursor has started moving before smudging (used at the end of the code)
self.isSmudging=True
def touch_ended(self, touch):
self.isSmudging=False
self.set_needs_display()
def draw(self):
r=self.cursorRadius
#draw a red cursor/circle for the follower (debug mode):
if debug:
x,y=self.cursorContinuousFollower
ui.set_color('red')
ui.Path.oval(x-r, y-r, 2*r,2*r).stroke()
#draw a blue cursor/circle for the actual cursor:
x,y=self.cursor
ui.set_color('blue')
ui.Path.oval(x-r, y-r, 2*r,2*r).stroke()
#The only point of the GlobalView is to have the other views (display array view and cursor view on top of it) as subviews.
class GlobalView (ui.View):
def __init__(self, width=1024, height=1024):
self.width=width
self.height=height
#Definition of the cursor view (thus creating a computation numpy image array and a display image array):
self.cursor_view=CursorView(frame=(0,0,width,height))
cv=self.cursor_view
#the views as subsviews of this one:
self.add_subview(cv.display.view)
self.add_subview(cv)
#Actual beginning of the code's instructions:
w, h = ui.get_screen_size()
#Creating a GlobalView (hence a cursor view, and thus the computation and display image arrays)
mv= GlobalView(w,h)
mv.present('fullscreen')
#Getting the cursor view inside the global view:
cv=mv.cursor_view
#Dereferencing the radius of the brush:
r=int(cv.cursorRadius)
#Creating the mask that will be used for smudging (to smooth out the borders of the smudge instead of having hard edges caused by hard square smudge regions)
#The mask is initialized as an array of zeros
mask=np.zeros((2*r,2*r,1),dtype=np.float32)
#Center of the mask:
c= scene.Vector2(r,r)
for i in range(int(2*r)):
for j in range(int(2*r)):
#Vector representing the (i,j) position on the mask
v=scene.Vector2(i,j)
#d is the distance of (i,j) to the center of the mask, relative to the cursor's radius r, and clamped to not be one (to avoid a division by zero in the next line)
d=min(abs(v-c)/r,0.9999)
#Famous bump function in mathematics, the mask is thus filled with values depending on the distance of (i,j) to the center in a decreasing way, from 1 to 0, and with a zero derivative on the edges and at the center (so that we have a nice bump mask!)
mask[j,i]= math.exp(1-1/(1-d*d))
#Defining the stamp
stamp = np.zeros((2*r,2*r,1),dtype=np.float32)
#A clean stamp is actually not ideal as explained below, so it will be initialized correctly later.
stampInit=False
#This is a parameter controlling how quickly the continuous follower follows the cursor
attraction=0.2
#Infinite smudge loop:
while (mv.on_screen):
if cv.isSmudging:
#Here we compute the next position of the continuous follower and also store it in x,y to use it for the smudge computation below
x,y= cv.cursorContinuousFollower*(1-attraction) +cv.cursor*attraction
#Here we compute how far the follower should stay from the border of the canvas to avoid out-of-bounds regions.
safetyMarge=r*cv.dirtyRegionFactor
#To avoid issues when the cursor crosses the border of the screen, we clamp it to the screen inside the safe area
x=int(max(min(x, w-safetyMarge),safetyMarge))
y=int(max(min(y, h-safetyMarge),safetyMarge))
#Updating the continuous follower to its new and safe positon
cv.cursorContinuousFollower=scene.Vector2(x,y)
#Handles the initialization of the stamp:
if stampInit==False:
#This is a bit tricky: when we start smudging again, the stamp must not be clean, because a clean stamp would "erase" the drawing. Ideally, it should be identical to the region it is put down on:
stamp=1*cv.imageArray[y-r:y+r, x-r:x+r]
stampInit=True
#The actual smudge computation!!! mixing the stamp with the region around the continuous follower with a 0.1 factor
cv.imageArray[y-r:y+r, x-r:x+r]+=(stamp-cv.imageArray[y-r:y+r, x-r:x+r])*mask*cv.force*0.1
#Updating the stamp so it gets dirtied by the new region:
stamp+=0.1*(cv.imageArray[y-r:y+r, x-r:x+r ]-stamp)
else:
#Deinitialize the stamp for the next time:
stampInit=False