@stecman
Last active March 14, 2024 15:56
Lightroom plugin to calculate image crops using OpenCV

Film negative auto-crop plugin for Lightroom 6

This is a proof of concept plugin for Adobe Lightroom 6 that automatically crops scanned film negatives to only the exposed area of the emulsion using OpenCV.

The detection works, but it could be better. Currently it does a single pass:

  1. Mask out extremely bright points (eg. light coming through the sprocket holes)
  2. Threshold the image starting from zero, increasing in steps
  3. At each threshold, collect the rotated bounding rectangle around the largest contour/blob (larger than a minimum size)
  4. Once the largest contour/blob is too large, stop collecting rects
  5. Calculate the crop for the image using the median of the collected rectangles

This works most of the time, but fails on images that threshold to many smaller contours that don't join (eg. one in each corner).
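The median step (5) can be sketched in isolation. This is a minimal illustration, not the code in detect.py: it assumes each collected rectangle is an OpenCV-style ((cx, cy), (w, h), angle) tuple, and `median_rect` is a hypothetical name.

```python
from statistics import median

def median_rect(rects):
    """Combine rotated rects ((cx, cy), (w, h), angle) by per-component median.

    Taking the median rather than the mean keeps the result robust against
    the occasional outlier rect collected at a bad threshold level.
    """
    return (
        (median(r[0][0] for r in rects), median(r[0][1] for r in rects)),
        (median(r[1][0] for r in rects), median(r[1][1] for r in rects)),
        median(r[2] for r in rects),
    )

rects = [
    ((500, 330), (900, 600), -1.0),
    ((502, 331), (905, 598), -0.8),
    ((480, 340), (990, 640), -4.0),  # outlier from a bad threshold
]
print(median_rect(rects))  # ((500, 331), (905, 600), -1.0)
```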

Images of this running can be seen on Hackaday.io. Note that some modifications have been made since the Hackaday post. The exact code demoed in the post can be seen here.

Setup

Running on Windows (rough)

  • OpenCV and Python installed in the Windows 10 Linux Subsystem (because Python+OpenCV natively on Windows is a pain to set up)
  • Xming (X server for Windows) running to allow windows from the Python script

Running on OSX

The easiest way to install OpenCV at the time of writing is through Homebrew:

# Install OpenCV 3.x with Python Bindings
brew install opencv@3

# Let Python know about the OpenCV bindings
for dir in $(find $(brew --prefix opencv@3)/lib -maxdepth 2 -name 'site-packages'); do
    _pythonVersion=$(basename $(dirname "$dir"))
    _pathfile="/usr/local/lib/$_pythonVersion/site-packages/opencv3.pth"
    echo "Adding $_pathfile"
    echo "$dir" > "$_pathfile"
done

# Check it worked
python -c 'import cv2' && echo 'OK!'

Then clone this Gist into your Lightroom plugin folder:

cd "$HOME/Library/Application Support/Adobe/Lightroom/Modules/"
git clone https://gist.github.com/91cb5d28d330550a1dc56fa29215cb85.git AutoCrop.lrplugin

Restart Lightroom and you should now see "Negative Auto Crop" listed under File -> Plug-in Manager. Use File -> Plug-in Extras -> Auto Crop Negative to run the script.

Notes

It's easiest to hack on the Python script by running it directly with a test image, rather than running it through Lightroom. Running from Lightroom is slower and you'll only see an exit code if the script has a problem.

The Python and Lua components of this are independent; you can switch the Python script out for any external program, as long as it writes the same data out for Lightroom.

Communication between Lua and Python

The Lightroom API doesn't provide a way to read any output stream from a subprocess, so the crop data computed in Python is written to a text file and picked up by the Lua plugin.

The format of this file is five numbers separated by new lines. The first four numbers are edge positions in the range 0.0 to 1.0 (factors of the image dimension). The last number is the rotation/straightening angle in the range -45.0 to 45.0:

Left edge
Right edge
Top edge
Bottom edge
Rotation

In practice this looks like:

0.027
0.974
0.03333333333333333
0.982
-0.1317138671875

These numbers are always relative to the exported image's orientation. The Lua side handles any rotation needed to match the internal orientation of the image in Lightroom.
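A sketch of reading this format back (the helper name and example file name are my own, not part of the plugin):

```python
def read_crop_file(path):
    """Parse the five-line crop file written by the detection script.

    Returns [left, right, top, bottom, rotation]: four edge positions as
    fractions of the image dimensions, then a straighten angle in degrees.
    """
    with open(path) as f:
        values = [float(line) for line in f.read().split()]

    if len(values) != 5:
        raise ValueError("expected 5 numbers, got %d" % len(values))

    left, right, top, bottom, rotation = values
    assert 0.0 <= left < right <= 1.0
    assert 0.0 <= top < bottom <= 1.0
    assert -45.0 <= rotation <= 45.0

    return values

# Round-trip the example values from above
with open("crop-example.txt", "w") as f:
    f.write("0.027\n0.974\n0.0333\n0.982\n-0.1317\n")

print(read_crop_file("crop-example.txt"))
```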

Lightroom's Lua API

Lightroom's API is very poorly documented (unless I'm missing some newer docs that Adobe has locked away behind a login). It doesn't appear to be intended for anything other than exporting to custom services, which seems strange considering how extensible Photoshop is with scripts and plugins.

Images can be cropped through the Lightroom Lua API using the parameters CropLeft, CropRight, CropTop, and CropBottom. These aren't listed on the LrDevelopController page of the SDK docs, but are listed in the docs under LrPhoto:getDevelopSettings. Note that the sides (top, right, etc) are always relative to the orientation AB, not necessarily the top, right, etc of the exported image.

The orientation param is a two character string that represents the two corners at the top of the image:

AB:         BC:       CD:         DA:

A-----B     B---C     C-----D     D---A
|     |     |   |     |     |     |   |
D-----C     |   |     B-----A     |   |
            A---D                 C---B

(Each of these is rotated anti-clockwise by 90 degrees)

In my testing, orientation couldn't be read using LrDevelopController.getValue(), but I could retrieve it with LrPhoto's getDevelopSettings().
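For illustration, the edge remapping the Lua side performs can be transcribed to Python (the function name and dict representation are my own; the arithmetic mirrors the plugin's rotateCropForOrientation):

```python
def rotate_crop_for_orientation(crop, orientation):
    """Remap a crop measured on the exported image back to Lightroom's
    internal "AB" orientation.

    `crop` holds edge positions as fractions (0.0-1.0) plus a straighten
    angle; keys mirror the fields of the Lua crop table.
    """
    if orientation == "AB":
        # No adjustment needed: this is the orientation of the data
        return dict(crop)
    if orientation == "BC":
        return {"left": crop["top"], "right": crop["bottom"],
                "top": 1 - crop["right"], "bottom": 1 - crop["left"],
                "angle": crop["angle"]}
    if orientation == "CD":
        return {"left": 1 - crop["right"], "right": 1 - crop["left"],
                "top": 1 - crop["bottom"], "bottom": 1 - crop["top"],
                "angle": crop["angle"]}
    if orientation == "DA":
        return {"left": 1 - crop["bottom"], "right": 1 - crop["top"],
                "top": crop["left"], "bottom": crop["right"],
                "angle": crop["angle"]}
    raise ValueError("unknown orientation: " + orientation)

crop = {"left": 0.25, "right": 0.75, "top": 0.125, "bottom": 0.875, "angle": 1.5}
print(rotate_crop_for_orientation(crop, "BC"))
```

Note that mirrored orientations (e.g. "DC" for a horizontally flipped image) are not handled here or in the plugin.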

AutoCrop.lua:

-- LR imports
local LrApplication = import("LrApplication")
local LrApplicationView = import("LrApplicationView")
local LrBinding = import("LrBinding")
local LrDevelopController = import("LrDevelopController")
local LrDialogs = import("LrDialogs")
local LrExportSession = import("LrExportSession")
local LrFileUtils = import("LrFileUtils")
local LrFunctionContext = import("LrFunctionContext")
local LrLogger = import("LrLogger")
local LrPathUtils = import("LrPathUtils")
local LrProgressScope = import("LrProgressScope")
local LrTasks = import("LrTasks")

local log = LrLogger("AutoCrop")
log:enable("logfile")

-- Global settings
local scriptPath = LrPathUtils.child(_PLUGIN.path, "detect.py")

-- Template string to run Python scripts
local pythonCommand = "/usr/local/bin/python __ARGS__"

if WIN_ENV then
    -- Run Python through the Linux sub-system on Windows
    pythonCommand = "bash -c 'DISPLAY=:0 python __ARGS__'"
end

-- Create directory to save temporary exports to
local imgPreviewPath = LrPathUtils.child(_PLUGIN.path, "render")

if LrFileUtils.exists(imgPreviewPath) ~= true then
    LrFileUtils.createDirectory(imgPreviewPath)
end

local catalog = LrApplication.activeCatalog()
function setCrop(photo, angle, cropLeft, cropRight, cropTop, cropBottom)
    if LrApplicationView.getCurrentModuleName() == "develop" and photo == catalog:getTargetPhoto() then
        LrDevelopController.setValue("CropConstrainAspectRatio", false)
        LrDevelopController.setValue("straightenAngle", angle)
        LrDevelopController.setValue("CropLeft", cropLeft)
        LrDevelopController.setValue("CropRight", cropRight)
        LrDevelopController.setValue("CropTop", cropTop)
        LrDevelopController.setValue("CropBottom", cropBottom)
    else
        local settings = {}
        settings.CropConstrainAspectRatio = false
        settings.CropLeft = cropLeft
        settings.CropRight = cropRight
        settings.CropTop = cropTop
        settings.CropBottom = cropBottom
        settings.CropAngle = -angle

        photo:applyDevelopSettings(settings)
    end
end
-- Convert a Windows absolute path to a Linux Sub-System path
function fixPath(winPath)
    -- Do nothing on OSX
    if MAC_ENV then
        return winPath
    end

    -- Replace Windows drive with mount point in Linux subsystem
    local path = winPath:gsub("^(.+):", function(c)
        return "/mnt/" .. c:lower()
    end)

    -- Flip slashes the right way
    return path:gsub("%\\", "/")
end

-- Given a string delimited by whitespace, split into numbers
function splitLinesToNumbers(data)
    local result = {}

    for val in string.gmatch(data, "%S+") do
        result[#result + 1] = tonumber(val)
    end

    return result
end
function rotateCropForOrientation(crop, orientation)
    if orientation == "AB" then
        -- No adjustments needed: this is the orientation of the data
        return crop
    elseif orientation == "BC" then
        return {
            right = crop.bottom,
            bottom = 1 - crop.left,
            left = crop.top,
            top = 1 - crop.right,
            angle = crop.angle,
        }
    elseif orientation == "CD" then
        return {
            bottom = 1 - crop.top,
            left = 1 - crop.right,
            top = 1 - crop.bottom,
            right = 1 - crop.left,
            angle = crop.angle,
        }
    elseif orientation == "DA" then
        return {
            left = 1 - crop.bottom,
            top = crop.left,
            right = 1 - crop.top,
            bottom = crop.right,
            angle = crop.angle,
        }
    end
end
function processPhotos(photos)
    LrFunctionContext.callWithContext("export", function(exportContext)
        local progressScope = LrDialogs.showModalProgressDialog({
            title = "Auto negative crop",
            caption = "Analysing image with OpenCV",
            cannotCancel = false,
            functionContext = exportContext
        })

        local exportSession = LrExportSession({
            photosToExport = photos,
            exportSettings = {
                LR_collisionHandling = "rename",
                LR_export_bitDepth = "8",
                LR_export_colorSpace = "sRGB",
                LR_export_destinationPathPrefix = imgPreviewPath,
                LR_export_destinationType = "specificFolder",
                LR_export_useSubfolder = false,
                LR_format = "JPEG",
                LR_jpeg_quality = 1,
                LR_minimizeEmbeddedMetadata = true,
                LR_outputSharpeningOn = false,
                LR_reimportExportedPhoto = false,
                LR_renamingTokensOn = true,
                LR_size_doConstrain = true,
                LR_size_doNotEnlarge = true,
                LR_size_maxHeight = 1500,
                LR_size_maxWidth = 1500,
                LR_size_units = "pixels",
                LR_tokens = "{{image_name}}",
                LR_useWatermark = false,
            }
        })

        local numPhotos = exportSession:countRenditions()

        local renditionParams = {
            progressScope = progressScope,
            renderProgressPortion = 1,
            stopIfCanceled = true,
        }

        for i, rendition in exportSession:renditions(renditionParams) do
            -- Stop processing if the cancel button has been pressed
            if progressScope:isCanceled() then
                break
            end

            -- Common caption for progress bar
            local progressCaption = rendition.photo:getFormattedMetadata("fileName") .. " (" .. i .. "/" .. numPhotos .. ")"

            progressScope:setPortionComplete(i - 1, numPhotos)
            progressScope:setCaption("Processing " .. progressCaption)

            rendition:waitForRender()

            local photoPath = rendition.destinationPath
            local dataPath = photoPath .. ".txt"

            -- Build a command line to run a Python script on the exported image
            local cmd = pythonCommand:gsub("__ARGS__", '"' .. fixPath(scriptPath) .. '" "' .. fixPath(photoPath) .. '"')
            log:trace("Executing: " .. cmd)

            exitCode = LrTasks.execute(cmd)

            if exitCode ~= 0 then
                LrDialogs.showError("The Python script exited with a non-zero status: " .. exitCode .. "\n\nCommand line was:\n" .. cmd)
                break
            end

            if LrFileUtils.exists(dataPath) == false then
                LrDialogs.showError("The Python script exited cleanly, but the output data file was not found:\n\n" .. dataPath)
                break
            end

            -- Read crop points from analysis output
            -- The directions/sides here are relative to the image that was processed
            rawData = LrFileUtils.readFile(dataPath)
            cropData = splitLinesToNumbers(rawData)

            rawCrop = {
                left = cropData[1],
                right = cropData[2],
                top = cropData[3],
                bottom = cropData[4],
                angle = cropData[5],
            }

            -- Re-orient cropping data to "AB" so the crop is applied as intended
            -- (Crop is always relative to the "AB" orientation in Lightroom)
            developSettings = rendition.photo:getDevelopSettings()
            crop = rotateCropForOrientation(rawCrop, developSettings["orientation"])

            LrTasks.startAsyncTask(function()
                catalog:withWriteAccessDo("Apply crop", function(context)
                    setCrop(
                        rendition.photo,
                        crop.angle,
                        crop.left,
                        crop.right,
                        crop.top,
                        crop.bottom
                    )
                end, {
                    timeout = 2
                })
            end)

            LrFileUtils.delete(photoPath)
            LrFileUtils.delete(dataPath)
        end
    end)
end
-- Collect photos to operate on
local targetPhotos = {}

if LrApplicationView.getCurrentModuleName() == "develop" then
    targetPhotos[1] = catalog.targetPhoto
elseif LrApplicationView.getCurrentModuleName() == "library" then
    targetPhotos = catalog.targetPhotos
end

-- Run autocrop
LrTasks.startAsyncTask(function()
    -- Reset all crops so the exports can be processed properly
    LrDevelopController.resetCrop()

    -- Process crops externally and apply
    processPhotos(targetPhotos)
end)

return {}
detect.py:

import cv2
import copy
import math
import numpy as np
import os
import sys

# Detect OpenCV 2.x vs 3.x
from pkg_resources import parse_version
IS_OPENCV_2 = parse_version(cv2.__version__) < parse_version('3.0.0')

# Alias BoxPoints as this lives in a different place in OpenCV 2 and 3
if IS_OPENCV_2:
    BoxPoints = cv2.cv.BoxPoints
else:
    BoxPoints = cv2.boxPoints

# Detection settings
MAX_COVERAGE = 0.98
INSET_PERCENT = 0.005

def thresholdImage(img, lowerThresh, ignoreMask):
    _, binary = cv2.threshold(img, lowerThresh, 255, cv2.THRESH_BINARY_INV)  # THRESH_TOZERO_INV
    # binary = cv2.bitwise_not(binary)
    binary = cv2.bitwise_and(ignoreMask, binary)

    # Prevent tiny outlier collections of pixels spoiling the rect fitting
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.dilate(binary, kernel, iterations=3)
    binary = cv2.erode(binary, kernel, iterations=3)

    return binary
def findLargestContourRect(binary):
    largestRect = None
    largestArea = 0

    # Find external contours of all shapes
    if IS_OPENCV_2:
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    else:
        _, contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for cnt in contours:
        area = cv2.contourArea(cnt)

        # Keep track of the largest area seen
        if area > largestArea:
            largestArea = area
            largestRect = cv2.minAreaRect(cnt)

    return largestRect, largestArea
def findNonZeroPixelsRect(binary):
    edges = copy.copy(binary)

    if IS_OPENCV_2:
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    else:
        _, contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    nonZero = cv2.findNonZero(edges)

    if nonZero is None:
        # Return the same (rect, area) shape as the success path
        return None, 0

    rect = cv2.minAreaRect(nonZero)
    area = rect[1][0] * rect[1][1]

    return rect, area
def normaliseRectRotation(rawRects):
    """
    Normalize rect orientation to have an angle between -45 and 45 degrees

    Rects generated by OpenCV can be "portrait" with a near -90 angle and flipped height/width.
    To combine and compare rects meaningfully, they need to all have the same orientation.
    """
    rects = []

    for rect in rawRects:
        center = rect[0]
        size = rect[1]
        angle = rect[2]

        if angle < -45:
            rect = (
                center,
                (size[1], size[0]),
                angle + 90
            )

        rects.append(rect)

    return rects

def medianRect(rects):
    if len(rects) == 0:
        return None

    rects = normaliseRectRotation(rects)

    # Sort rects by area
    rects.sort(key=lambda rect: rect[1][0] * rect[1][1])

    median = (
        (np.median([r[0][0] for r in rects]), np.median([r[0][1] for r in rects])),
        (np.median([r[1][0] for r in rects]), np.median([r[1][1] for r in rects])),
        np.median([r[2] for r in rects])
    )

    return median
def correctAspectRatio(rect, targetRatio=1.5, maxDifference=0.3):
    """
    Return an aspect-ratio corrected rect (and success flag)

    Args:
        rect (OpenCV RotatedRect struct)
        targetRatio (float): Ratio represented as the larger image dimension divided by the smaller one
    """
    # Indexes into the rect nested tuple
    CENTER = 0; SIZE = 1; ANGLE = 2
    X = 0; Y = 1

    size = rect[SIZE]
    aspectRatio = max(size[X], size[Y]) / float(min(size[X], size[Y]))
    aspectError = targetRatio - aspectRatio

    # Factor out orientation to simplify logic below
    # This assumes the larger dimension as X
    if size[X] == max(size[X], size[Y]):
        rectWidth = size[X]
        rectHeight = size[Y]
        widthDim = X
        heightDim = Y
    else:
        rectHeight = size[X]
        rectWidth = size[Y]
        widthDim = Y
        heightDim = X

    # Only attempt to correct aspect ratio where the ROI is roughly right already
    # This prevents odd results for poor outline detection
    if abs(aspectError) > maxDifference:
        return rect, False

    # Shrink width if the ratio was too wide
    if aspectRatio > targetRatio:
        print("ratio too large", aspectError)
        rectWidth = size[heightDim] * targetRatio

    # Shrink height if the ratio was too tall
    elif aspectRatio < targetRatio:
        print("ratio too small", aspectError)
        # rectWidth = size[heightDim] * targetRatio
        rectHeight = size[widthDim] / targetRatio

    # Apply new width/height in the original orientation
    if widthDim == X:
        newSize = (rectWidth, rectHeight)
    else:
        newSize = (rectHeight, rectWidth)

    newRect = (rect[CENTER], newSize, rect[ANGLE])

    return newRect, True
def findExposureBounds(img, showOutputWindow=False):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Smooth out noise
    # gray = cv2.GaussianBlur(gray,(5,5),0)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)

    # Maximise brightness range
    gray = cv2.equalizeHist(gray)

    # Create a mask to ignore the brightest spots
    # These are usually where there is no film covering the light source
    _, ignoreMask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)

    # Expand masked out area slightly to include adjacent edges
    kernel = np.ones((3, 3), np.uint8)
    ignoreMask = cv2.dilate(ignoreMask, kernel, iterations=3)

    # Create a mask to ignore areas of low saturation
    # When white balanced against the film stock, this is usually low saturation
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hsv = cv2.GaussianBlur(hsv, (5, 5), 0)
    satMask = cv2.inRange(hsv, (0, 0, 0), (255, 7, 255))

    # Combine saturation and brightness masks, then flip
    ignoreMask = cv2.bitwise_or(ignoreMask, satMask)
    ignoreMask = cv2.bitwise_not(ignoreMask)

    # Get min/max region of interest areas
    height, width, _ = img.shape
    maxArea = (height * MAX_COVERAGE) * (width * MAX_COVERAGE)
    minCaptureArea = maxArea * 0.65

    # algos = [findNonZeroPixelsRect]
    algos = [findLargestContourRect]
    results = []

    for func in algos:
        lowerThreshold = 0

        while lowerThreshold < 240:
            binary = thresholdImage(gray, lowerThreshold, ignoreMask)
            debugImg = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)

            rect, area = func(binary)

            # Stop once a valid result is returned
            if area >= maxArea:
                break

            if area >= minCaptureArea:
                results.append(rect)
                lowerThreshold += 5

                # Draw in green for results that are collected
                debugLineColour = (0, 255, 0)
            else:
                lowerThreshold += 5

                # Draw in red for areas that were too small
                debugLineColour = (0, 0, 255)

            if showOutputWindow:
                if rect is not None:
                    # Get a rectangle around the contour
                    rectPoints = BoxPoints(rect)
                    rectPoints = np.int0(rectPoints)
                    cv2.drawContours(debugImg, [rectPoints], -1, debugLineColour, 3)

                # Draw threshold on debug output
                cv2.putText(
                    img=debugImg,
                    text='Threshold: ' + str(lowerThreshold),
                    org=(20, 30),
                    fontFace=cv2.FONT_HERSHEY_PLAIN,
                    fontScale=2,
                    color=(0, 150, 255),
                    lineType=4
                )

                cv2.imshow('image', cv2.resize(debugImg, (0, 0), fx=0.75, fy=0.75))
                cv2.waitKey(1)

    return medianRect(results)
if __name__ == '__main__':
    import argparse

    parser = argparse.ArgumentParser(description='Find crop for film negative scan')
    parser.add_argument('files', nargs='+', help='Image files to perform detection on (JPG, PNG, etc)')
    args = parser.parse_args()

    hasDisplay = os.getenv('DISPLAY') is not None

    for filename in args.files:
        if not os.path.exists(filename):
            print("ERROR:")
            print("Could not find file '%s'" % filename)
            sys.exit(5)

        # read image and convert to gray
        img = cv2.imread(filename, cv2.IMREAD_UNCHANGED)
        # cv2.imshow('image', cv2.resize(img, (0,0), fx=0.75, fy=0.75) )
        # cv2.waitKey(0)

        rawRect = findExposureBounds(img, showOutputWindow=hasDisplay)

        # Outputs for Lightroom
        cropLeft = 0
        cropRight = 1.0
        cropTop = 0
        cropBottom = 1.0
        rotation = 0

        if rawRect is not None:
            # Average height and width of the detected area to get a constant inset
            insetPixels = ((rawRect[1][0] + rawRect[1][1]) / 2.0) * INSET_PERCENT
            insetRect = (
                rawRect[0],  # Center
                (rawRect[1][0] - insetPixels, rawRect[1][1] - insetPixels),  # Size
                rawRect[2]  # Rotation
            )

            rect, aspectChanged = correctAspectRatio(insetRect)

            boxWidth = rect[1][0]
            boxHeight = rect[1][1]
            box = np.int0(BoxPoints(rect))

            # # Create a mask that excludes areas that are probably the directly visible light source
            # _, wbMask = cv2.threshold(gray, 253, 0, cv2.THRESH_TOZERO)
            # wbMask = cv2.bitwise_not(wbMask)

            # # Mask out the detected frame - we only want to look at the base film layer
            # cv2.fillConvexPoly(wbMask, box, 0)
            # # cv2.imshow('image', wbMask )
            # # cv2.waitKey(0)

            # # bgr = cv2.mean(img, wbMask)
            # lab = cv2.mean(cv2.cvtColor(img, cv2.COLOR_BGR2LAB), wbMask)
            # # print [i for i in reversed(bgr)]

            # tint = lab[1] - 127
            # temperature = lab[2] - 127
            # print (lab[0]/255.0)*100, temperature, tint

            # Lightroom doesn't support rotation more than 45 degrees
            # The detected rect usually includes a 90 degree rotation for landscape images
            rotation = -rect[2]
            if rotation > 45:
                rotation -= 90
            elif rotation < -90:
                rotation += 45

            # Calculate crops in a format for Lightroom (0.0 to 1.0 for each edge)
            centerX = rect[0][0]
            centerY = rect[0][1]

            # Use the average distance from each side as the crop in Lightroom
            imgHeight, imgWidth, _ = img.shape
            top = []; left = []; right = []; bottom = []

            for point in box:
                # point = rotateAroundPoint(point, math.radians(rotation))
                if point[0] > centerX:
                    right.append(point[0])
                else:
                    left.append(point[0])

                if point[1] > centerY:
                    bottom.append(point[1])
                else:
                    top.append(point[1])

            cropRight = (min(right)) / float(imgWidth)
            cropLeft = (max(left)) / float(imgWidth)
            cropBottom = (min(bottom)) / float(imgHeight)
            cropTop = (max(top)) / float(imgHeight)

            # Draw original detected area
            rawBox = np.int0(BoxPoints(rawRect))
            cv2.drawContours(img, [rawBox], -1, (255, 0, 0), 1)

            # Draw inset area
            insetBox = np.int0(BoxPoints(insetRect))
            cv2.drawContours(img, [insetBox], -1, (0, 255, 255), 1)

            # Draw adjusted aspect ratio area
            cv2.drawContours(img, [box], -1, (0, 255, 0), 2)
            cv2.circle(img, (int(rect[0][0]), int(rect[0][1])), 3, (0, 255, 0), 3)

        # Write result to disk for Lightroom plugin to pick up
        # (The Lightroom API doesn't appear to allow streaming in output from a program)
        cropData = [
            cropLeft,
            cropRight,
            cropTop,
            cropBottom,
            rotation
        ]

        for v in cropData:
            print(v)

        with open(filename + ".txt", 'w') as out:
            out.write("\r\n".join(str(x) for x in cropData))

        cv2.imwrite(filename + "-analysis.jpg", img)

        # if hasDisplay:
        #     cv2.imshow('image', cv2.resize(img, (0,0), fx=0.75, fy=0.75) )
        #     cv2.waitKey(0)
Info.lua:

return {
    LrSdkVersion = 6.0,
    LrSdkMinimumVersion = 6.0,

    LrToolkitIdentifier = 'nz.co.stecman.negativeautocrop',
    LrPluginName = "Negative Auto Crop",

    LrExportMenuItems = {
        {
            title = "Auto &Crop Negative",
            file = "AutoCrop.lua",
            enabledWhen = "photosSelected"
        }
    },

    VERSION = {
        major = 1,
        minor = 0,
        revision = 0,
    }
}
@Herbert1630

Hi! Thank you, this is very useful. I need to crop a massive number of pictures (around 29,000).
I have no knowledge of computers and coding, but I managed to install the plugin. Now in LR I get this error: 32512
What should I do? What's the likely cause?
Thank you!

@stecman (author) commented Dec 8, 2019

@Herbert1630 unfortunately the Lightroom error codes aren't very meaningful, but I assume the call to run the Python script is failing for some reason. There should be an AutoCrop.log file in your Documents folder that lists the command it executed:

Executing: bash -c 'DISPLAY=:0 python "/mnt/c/Users/Stecman/Documents/Lightroom/NegativeAutoCrop.lrplugin/detect.py" "/mnt/c/Users/Stecman/Documents/Lightroom/NegativeAutoCrop.lrplugin/render/_MG_2398.jpg"' 

If you copy everything after Executing: and run it in Powershell on Windows or Terminal on Mac OSX, you'll get a proper error message that says what the problem is.

Do note that while this setup works, it really is just a proof of concept at this stage and needs further work to apply reliably - especially at the scale you're looking at. If you want to reach out to the email on my Github profile, I'd be happy to work with you to get this to a more generally usable state.

@mngyuan commented Jun 8, 2020

Any chance of making this work for LR CC? I've got the script running but it just toggles the crop tool on and closes it without doing anything.

@stecman (author) commented Jun 9, 2020

Have you tested the Python script is giving expected results outside of Lightroom, @mngyuan?

I haven't tested this with LR CC, but it sounds like the Lua side is working at least if it's getting to the cropping stage. Since you're not seeing any error dialogs, I suspect the plugin may be working but the crop returned from the Python script is 0, 1, 0, 1 (ie. no crop).

@mngyuan commented Jun 11, 2020

@stecman you're correct, I'm getting the following output

~  ➜  /usr/local/bin/python "/Users/phorust/Downloads/AutoCrop.lrplugin/detect.py" "/Users/phorust/Downloads/AutoCrop.lrplugin/render/HOMED_00107.jpg-analysis.jpg"
0
1.0
0
1.0
0

I thought potentially it was because of the nature of the scan (my scans have the negative holder in frame), or because I had already converted it into a positive through Negative Lab Pro, but neither of those things seem to have an effect.

I may dive in deeper later if I have time ..., so any pointers on where to start are appreciated!

@stecman (author) commented Jun 11, 2020

@mngyuan this example is tuned pretty specifically for my negative captures with emulsion to the edges of the frame, so it will likely need some changes for other setups. Currently I don't think it will work for positives as it tries to find a dark area (exposure) inside a lighter one (unexposed part of the film).

A few pointers:

  • You should see a preview window showing the detection as it's running. You may have to force hasDisplay to True on OSX if you're not seeing this. With the preview it should be fairly evident where it's going wrong.

    cv2.imshow calls like the one commented out at the end are useful for inspecting an image at various points.

  • There are two detection methods in the Python script (see the algos variable). One of them may work better for your images:

    • findLargestContourRect: Finds the largest contiguous blob of pixels in a thresholded image and returns its bounds.
    • findNonZeroPixelsRect: Returns bounds that contain all white pixels in a thresholded image.
  • Don't worry about the LR side until the Python script is doing what you want - it's just some glue to push cropping data into Lightroom and makes debugging difficult.

@JackyChiu

Thanks for making this! It works pretty well with scans done without a film strip holder since you get more borders.

I'm on macOS and was getting the error The Python script exited with a non-zero status: 256 .... Like the comment in the Lua script says, I had to change it to the absolute path of my Python binary (python3 for me). Just in case anyone else runs into this issue.

@ueuecoyotl

Everything seemed to be going swimmingly, then I ran the check and got this back:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named cv2

@RobocoderWang

With local pythonCommand = "python3.9 __ARGS__", I get an error

@just-w commented Sep 30, 2021

I'm using WSL 2 (Ubuntu 20.04). import cv2 correctly finds cv2, but when I try to use the plugin I get this error:
[error screenshot]

@stecman (author) commented Sep 30, 2021

@just-w see my response in this comment above

@taras-sereda

@Herbert1630 return code 32512 maps to exit status 127 ("command not found"), which means Lua can't find the path to your Python interpreter.
Try providing an absolute path instead of a relative one - this might help.
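That 32512-to-127 mapping comes from how POSIX wait statuses are packed: the child's exit code lives in the high byte of the 16-bit status. A quick sketch (the assumption here is that LrTasks.execute returns the raw wait status on Mac/Linux):

```python
# Decode a raw POSIX wait status into the child's exit code.
# The exit code occupies bits 8-15 of the status word.
status = 32512
exit_code = (status >> 8) & 0xFF
print(exit_code)  # 127: the shell's "command not found" exit code
```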

@stecman thank you for the plugin! Just wanted to let you know that it works with Lightroom Classic as well.

@taras-sereda

@stecman have you figured out how to differentiate stack traces coming from the Python script, the OS, or Lua?
I'm wondering because 32512/127 isn't related to the Python script's logic - it's the result of the
LrTasks.execute(cmd) call. So I'm curious how one can tell from the resulting stack trace where to dig next.
In my case everything worked fine when called from the terminal, but failed only when executing the Lua part.

@stecman (author) commented Oct 16, 2021

@taras-sereda so running the exact command line logged in the AutoCrop.log file (including bash.exe -c) works, but running from the Lua script exits with 127?

  • Windows uses exit code 9009 for command not found
  • Bash uses 127 for command not found
  • The python interpreter will also exit with 127 if it can't find the script to execute

So it appears to be getting into WSL ok, but either the python command isn't accessible, or the Python script path is inaccessible. With a fresh install of WSL2 ubuntu-20.04, it looks like python doesn't exist, but python3 does:

  • bash -c 'python -V' -> code 127
  • bash -c 'python3 -V' -> success

This might be your and @just-w's problem

@avegancafe commented Oct 17, 2021

Anyone have the issue where the prompt won't close after running? I'm trying to debug this and have two main problems: 1/ it looks like the plugin's not actually cropping (it basically thinks the "largest rectangle" seems to be the whole image), and 2/ I can't hit "cancel" on the dialog box when the process is done

Sample `analysis` photo after running default algo

[screenshot: DSC00957.jpg-analysis]

Plugin stuck after processing

[screenshot of the stuck progress dialog]

Edit:
While trying to edit the MAX_COVERAGE, I noticed that I was able to get it to identify a contour, but it appears as if it's not level and the height is huge because it's skewed, so it still doesn't crop (probably thinks that the crop size is bigger than the image size or something).

New contour that kind of works?

[screenshot: DSC00957.jpg-analysis]

@avegancafe commented Oct 17, 2021

I also tried running the other algorithm 'cause I figured maybe the issue was with the algo to find the new crop and I get this error:

AutoCrop.lrplugin on  master [!?] via 🌙 via 🐍 v3.9.7
λ /usr/local/bin/python "/Users/kyle/Library/Application Support/Adobe/Lightroom/Modules/AutoCrop.lrplugin/detect.py" "/Users/kyle/Library/Application Support/Adobe/Lightroom/Modules/AutoCrop.lrplugin/render/DSC00957.jpg"
Traceback (most recent call last):
  File "/Users/kyle/Library/Application Support/Adobe/Lightroom/Modules/AutoCrop.lrplugin/detect.py", line 295, in <module>
    rawRect = findExposureBounds(img, showOutputWindow=hasDisplay)
  File "/Users/kyle/Library/Application Support/Adobe/Lightroom/Modules/AutoCrop.lrplugin/detect.py", line 225, in findExposureBounds
    rect, area = func(binary)
ValueError: too many values to unpack (expected 2)

Edit:
Just realized this might be because of this line... any reason why it's returning 3 values here, when only 2 are used?

@stecman (author) commented Oct 17, 2021

@keyboard-clacker I haven't seen the issue with the prompt not closing, unless you've uncommented one of the cv2.waitKey(0) lines, in which case the preview window will be waiting to receive a key press.

Regarding your input image, that won't work with this proof of concept unfortunately. This detects a darker area (exposure) surrounded completely by a lighter area that extends to the edge of the image (film base with light shining through). I've been working on a more practical version of this to release as ready-packaged software, and the main challenge is making it compatible with all of the various scanning setups as everyone has different gear and frames differently.

The 3 returned values vs 2 is an OpenCV API thing - you might have OpenCV 4 which has the same findContours return values as OpenCV 2.
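One way to sidestep the 2-vs-3 return-value difference is to note that the contour list is always the second-to-last element of whatever tuple findContours returns. A small helper (the name is my own, not part of this gist) works across OpenCV 2, 3, and 4:

```python
def grab_contours(find_contours_result):
    """Return the contour list from a cv2.findContours result tuple,
    whether the installed OpenCV returns 2 values (v2/v4: contours,
    hierarchy) or 3 values (v3: image, contours, hierarchy)."""
    if len(find_contours_result) not in (2, 3):
        raise ValueError("unexpected findContours return shape")
    return find_contours_result[-2]

# Simulated return shapes, so no cv2 install is needed for the demo:
v3_style = ("image", ["contourA", "contourB"], "hierarchy")
v4_style = (["contourA", "contourB"], "hierarchy")
print(grab_contours(v3_style) == grab_contours(v4_style))  # True
```

With this in place, the script could call `grab_contours(cv2.findContours(...))` instead of branching on the version.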

@avegancafe commented Oct 17, 2021

Ahh fascinating thanks @stecman. I was actually able to work through the weird dev issues to find a solution that actually works for me, where basically instead of finding the "largest" rect, I find the smallest rect bigger than a given size (in my case, my rectangles are ~60% of the image, and my min size for a rect I set to ~30% of that). This actually gives me something like this for my analysis file!

Working analysis file!

[screenshot: DSC00957.jpg-analysis]

However, I still can't get past that hanging process issue. In the script I notice there's a cv2.waitKey(1); might that have something to do with it? The only other thing I could think of is that maybe it's waiting on the tasks run with LrTasks.startAsyncTask to be declared done, and it's not calling some "done" callback or something? Fully spitballing here; I've never worked with this API before, just comparing it to similar async task runners I've used in the past, like JS testing frameworks or Gulp.

EDIT:

An interesting finding that I can't quite figure out how to work around: right before I see that "task completed" dialog, what looks like an error message flashes on my screen extremely quickly, far too fast to read. Might it be because of an error with setting the crop?

EDIT 2:

I found out what it was (probably): my develop settings orientation is DC. Maybe because I flipped the image horizontally or something?

proof of weird unaccounted for orientation


@DOS9570

DOS9570 commented Feb 3, 2022

Hey, fascinating work.
I've been looking for quite some time for such a cool script.
However, when I try it I get: ValueError: not enough values to unpack (expected 3, got 2)
I traced it back to a version incompatibility between OpenCV 3 and OpenCV 4 (the module is called cv2 in both).
Unfortunately I cannot get the older package installed via pip.
Do you have any idea how I could get this running?

@puik

puik commented Jul 28, 2022

I wonder, could this plugin be modified so that it could be used with LR 4.4? It seems to be missing LrApplicationView.

@stecman

stecman commented Aug 1, 2022

@puik probably, but the hard part is finding documentation for the LR4 API. It was tricky enough to find the LR6 docs!

@lcooperdesign

Wow, this is fascinating. I use Lightroom to crop hundreds of fashion photography images, all at varying distances from the model. At the very least I need a full-length crop of each model with about 1.5% above the head and below the feet, with the model's eyes/face used as the marker for centering in a frame of pre-set crop ratio (2.07 × 3). Would it be possible to write a plug-in for Lightroom Classic that does this? Mars Premedia's Auto Crop plug-in for Photoshop does this brilliantly, albeit destructively. Would love to hear your thoughts, because I think there are hundreds of e-commerce sites that'd bite your hand off for such a tool.

@adrienafl

adrienafl commented Aug 8, 2023

Hey! On paper this is exactly what I need. In practice, it looks like the script is broken. Is anyone still using it with Lightroom Classic? Otherwise I could consider working on an update to save myself hours of cropping.
@stecman did you upload this gist to a dedicated repo so we can create some PRs?

@stecman

stecman commented Aug 8, 2023

Hey @adrienafl, the image analysis part here is a proof-of-concept, not so much ready to use. It's more of a starting point for development with plumbing for getting an external cropping application working with Lightroom through its very limited API.

I'm happy to move this to a repo if it would help people get it running with the right dependency versions. That would let you hack around and define an algorithm that works well enough for your images. The image analysis is the hard part - getting something that works for everyone really needs a completely different pipeline, but if your scans are consistent enough, this simple approach might work.

@stecman

stecman commented Aug 8, 2023

@lcooperdesign If you already own that Photoshop plugin, I'm sure it could be connected to Lightroom through a concoction of scripts: something like a watched folder an LR plugin exports to, a Photoshop script to run the Premedia plugin and extract the crop information (probably from the action history), then dump the cropping information in a file for the LR plugin to scoop up.
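The Lightroom-side half of that watched-folder glue could look something like the sketch below. The "*.crop.json" naming and the file format are hypothetical, invented for illustration, and not part of any existing plugin:

```python
import json
from pathlib import Path

def scan_once(folder, seen):
    """One polling pass over the watched folder: parse any new
    '*.crop.json' files (a hypothetical naming scheme) dropped there
    by the Photoshop side, so an LR plugin could apply the crops.

    `seen` is a set of already-processed paths, mutated in place.
    Returns the list of newly parsed crop dicts.
    """
    crops = []
    for path in sorted(Path(folder).glob("*.crop.json")):
        if path not in seen:
            seen.add(path)
            crops.append(json.loads(path.read_text()))
    return crops
```

A real plugin would call something like this on a timer from an async task and translate each crop dict into develop settings.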

@adrienafl

> Hey @adrienafl, the image analysis part here is a proof-of-concept, not so much ready to use. It's more of a starting point for development with plumbing for getting an external cropping application working with Lightroom through its very limited API.
>
> I'm happy to move this to a repo if it would help people get it running with the right dependency versions. That would let you hack around and define an algorithm that works well enough for your images. The image analysis is the hard part - getting something that works for everyone really needs a completely different pipeline, but if your scans are consistent enough, this simple approach might work.

Cool! In my case I'm using the Valoi scanning kit / an Epson V850 Pro. For both scanning methods the operation should be easy, since the negative is always surrounded by a black frame. I'll try to make it work in the CLI first before moving to an LR plugin.

@lenolib

lenolib commented Aug 8, 2023

I'm quite interested in this as well, and I think it could be a great asset for the whole community of people digitizing film (positive and negative in my case). With a library of, say, 5000 images and maybe 8 seconds to crop each, that is roughly 11 hours of work.
Having scaffolding and plumbing into Lightroom makes it viable as part of a pipeline without round-trips into other tools (i.e. a single final export), which would be great.
I dabbled a bit myself some years ago, trying to do this in Python, but never got it working robustly enough to be satisfying.
I think with some reasonable assumptions and limits (e.g. the crop rectangle is expected to fill >X% of the image, max rotation <Y degrees), getting to 99% accuracy should be doable.
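Those sanity limits are easy to express as a filter over crop candidates. A minimal sketch, where the 50% fill and 5-degree numbers are placeholders rather than tuned values:

```python
def plausible_crop(rect_area, image_area, angle_deg,
                   min_fill=0.5, max_rotation=5.0):
    """Reject crop candidates that violate the assumptions above:
    the rect must cover at least `min_fill` of the image and be
    rotated no more than `max_rotation` degrees. Thresholds are
    illustrative placeholders, not values from the plugin.
    """
    return (rect_area >= min_fill * image_area
            and abs(angle_deg) <= max_rotation)
```

Candidates failing the filter could fall back to a manual crop queue rather than silently producing a bad crop.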

@adrienafl

Hey @stecman, did you have time to move this to a repo? I checked your GitHub profile but didn't find it.

Thanks!

@chbornman

chbornman commented Feb 7, 2024

> Using WSL 2 (Ubuntu 20.04), the import correctly found cv2, but when I try to use the plugin I get this error: image

I was getting this same error on macOS, but I figured out that it couldn't find where "python" was:
https://stackoverflow.com/questions/48484152/os-system-returns-error-code-32512-python

So in the AutoCrop.lua script I changed line 23 from:

```lua
local pythonCommand = "python ARGS"
```

to:

```lua
local pythonCommand = "/opt/homebrew/bin/python ARGS"
```
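If you're not sure which absolute interpreter path to substitute in, Python itself can report it (the printed path is machine-dependent, so don't copy the example value):

```python
import shutil

# Locate the interpreter that Lightroom's shell should invoke; paste
# the printed path into the pythonCommand line in AutoCrop.lua.
path = shutil.which("python3") or shutil.which("python")
print(path)
```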
