Lightroom plugin to calculate image crops using OpenCV

Film negative auto-crop plugin for Lightroom 6

This is a proof of concept plugin for Adobe Lightroom 6 that automatically crops scanned film negatives to only the exposed area of the emulsion using OpenCV.

The detection works, but it could be better. Currently it does a single pass:

  1. Mask out extremely bright points (e.g. light coming through the sprocket holes)
  2. Threshold the image starting from zero, increasing in steps
  3. At each threshold, collect the rotated bounding rectangle around the largest contour/blob (larger than a minimum size)
  4. Once the largest contour/blob is too large, stop collecting rects
  5. Calculate the crop for the image using the median of the collected rectangles

This works most of the time, but fails on images that threshold into many smaller contours that don't join (e.g. one in each corner).

Images of this running can be seen on Hackaday.io. Note that some modifications have been made since the Hackaday post. The exact code demoed in the post can be seen here.

Setup

Running on Windows (rough)

  • OpenCV and Python installed in the Windows 10 Linux Subsystem (because Python+OpenCV natively on Windows is a pain to set up)
  • Xming (X server for Windows) running to allow windows from the Python script

Running on OSX

The easiest way to install OpenCV at the time of writing is through Homebrew:

# Install OpenCV 3.x with Python Bindings
brew install opencv@3

# Let Python know about the OpenCV bindings
for dir in $(find $(brew --prefix opencv@3)/lib -maxdepth 2 -name 'site-packages'); do
    _pythonVersion=$(basename $(dirname "$dir"))
    _pathfile="/usr/local/lib/$_pythonVersion/site-packages/opencv3.pth"
    echo "Adding $_pathfile"
    echo "$dir" > "$_pathfile"
done

# Check it worked
python -c 'import cv2' && echo 'OK!'

Then clone this Gist into your Lightroom plugin folder:

cd "$HOME/Library/Application Support/Adobe/Lightroom/Modules/"
git clone https://gist.github.com/91cb5d28d330550a1dc56fa29215cb85.git AutoCrop.lrplugin

Restart Lightroom and you should now see "Negative Auto Crop" listed under File -> Plug-in Manager. Use File -> Plug-in Extras -> Auto Crop Negative to run the script.

Notes

It's easiest to hack on the Python script by running it directly with a test image, rather than running it through Lightroom. Running from Lightroom is slower and you'll only see an exit code if the script has a problem.

The Python and Lua components of this are independent; you can switch the Python script out for any external program, as long as it writes the same data out for Lightroom.

Communication between Lua and Python

The Lightroom API doesn't provide a way to read any output stream from a subprocess, so the crop data computed in Python is written to a text file and picked up by the Lua plugin.

The format of this file is five numbers separated by new lines. The first four numbers are edge positions in the range 0.0 to 1.0 (factors of the image dimension). The last number is the rotation/straightening angle in the range -45.0 to 45.0:

Left edge
Right edge
Top edge
Bottom edge
Rotation

In practice this looks like:

0.027
0.974
0.03333333333333333
0.982
-0.1317138671875

These numbers are always relative to the exported image's orientation. The Lua side handles any rotation needed to match the internal orientation of the image in Lightroom.
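Reading this file back is trivial in any language. A hypothetical Python helper for illustration (the plugin's Lua side does the equivalent in its splitLinesToNumbers function):

```python
def parse_crop_file(text):
    """Parse the five newline-separated numbers written by the detection script:
    left, right, top, bottom edges (0.0 to 1.0), then rotation (-45.0 to 45.0)."""
    values = [float(v) for v in text.split()]
    if len(values) != 5:
        raise ValueError("expected 5 values, got %d" % len(values))
    return dict(zip(("left", "right", "top", "bottom", "angle"), values))
```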

Lightroom's Lua API

Lightroom's API is very poorly documented (unless I'm missing some newer docs that Adobe has locked away behind a login). It doesn't appear to be intended for anything other than exporting to custom APIs, which seems strange considering how extensible Photoshop is with scripts and plugins.

Images can be cropped through the Lightroom Lua API using the parameters CropLeft, CropRight, CropTop, and CropBottom. These aren't listed on the LrDevelopController page of the SDK docs, but are listed in the docs under LrPhoto:getDevelopSettings. Note that the sides (top, right, etc) are always relative to the orientation AB, not necessarily the top, right, etc of the exported image.

The orientation param is a two character string that represents the two corners at the top of the image:

AB:         BC:       CD:         DA:

A-----B     B---C     C-----D     D---A
|     |     |   |     |     |     |   |
D-----C     |   |     B-----A     |   |
            A---D                 C---B

(Each successive orientation is the previous one rotated anti-clockwise by 90 degrees)

In my testing, orientation couldn't be read using LrDevelopController:getValue(), but I could retrieve it using LrPhoto:getDevelopSettings().
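The four orientations can be treated as repeated applications of a single 90-degree step on the normalised edge values. This Python sketch is a hypothetical mirror of the remapping the Lua plugin performs (the function and key names are mine, not Lightroom's); applying the step four times returns the original crop:

```python
def rotate_crop_90(crop):
    """One 90-degree anti-clockwise orientation step for normalised (0.0-1.0)
    crop edges; equivalent to the "BC" case of the plugin's remapping."""
    return {
        "left": crop["top"],
        "right": crop["bottom"],
        "top": 1 - crop["right"],
        "bottom": 1 - crop["left"],
        "angle": crop["angle"],
    }

def rotate_crop_for_orientation(crop, orientation):
    """BC is one 90-degree step, CD two, DA three; AB needs no change."""
    steps = {"AB": 0, "BC": 1, "CD": 2, "DA": 3}[orientation]
    for _ in range(steps):
        crop = rotate_crop_90(crop)
    return crop
```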

-- AutoCrop.lua

-- LR imports
local LrApplication = import("LrApplication")
local LrApplicationView = import("LrApplicationView")
local LrBinding = import("LrBinding")
local LrDevelopController = import("LrDevelopController")
local LrDialogs = import("LrDialogs")
local LrExportSession = import("LrExportSession")
local LrFileUtils = import("LrFileUtils")
local LrFunctionContext = import("LrFunctionContext")
local LrLogger = import("LrLogger")
local LrPathUtils = import("LrPathUtils")
local LrProgressScope = import("LrProgressScope")
local LrTasks = import("LrTasks")

local log = LrLogger("AutoCrop")
log:enable("logfile")

-- Global settings
local scriptPath = LrPathUtils.child(_PLUGIN.path, "detect.py")

-- Template string to run Python scripts
-- (You may need to modify this to point to the right Python binary)
local pythonCommand = "python __ARGS__"

if WIN_ENV then
    -- Run Python through the Linux sub-system on Windows
    pythonCommand = "bash -c 'DISPLAY=:0 python __ARGS__'"
end

-- Create directory to save temporary exports to
local imgPreviewPath = LrPathUtils.child(_PLUGIN.path, "render")

if LrFileUtils.exists(imgPreviewPath) ~= true then
    LrFileUtils.createDirectory(imgPreviewPath)
end

local catalog = LrApplication.activeCatalog()
function setCrop(photo, angle, cropLeft, cropRight, cropTop, cropBottom)
    if LrApplicationView.getCurrentModuleName() == "develop" and photo == catalog:getTargetPhoto() then
        LrDevelopController.setValue("CropConstrainAspectRatio", false)
        LrDevelopController.setValue("straightenAngle", angle)
        LrDevelopController.setValue("CropLeft", cropLeft)
        LrDevelopController.setValue("CropRight", cropRight)
        LrDevelopController.setValue("CropTop", cropTop)
        LrDevelopController.setValue("CropBottom", cropBottom)
    else
        local settings = {}
        settings.CropConstrainAspectRatio = false
        settings.CropLeft = cropLeft
        settings.CropRight = cropRight
        settings.CropTop = cropTop
        settings.CropBottom = cropBottom
        settings.CropAngle = -angle

        photo:applyDevelopSettings(settings)
    end
end
-- Convert a Windows absolute path to a Linux Subsystem path
function fixPath(winPath)
    -- Do nothing on OSX
    if MAC_ENV then
        return winPath
    end

    -- Replace Windows drive with mount point in Linux subsystem
    local path = winPath:gsub("^(.+):", function(c)
        return "/mnt/" .. c:lower()
    end)

    -- Flip slashes the right way
    return path:gsub("%\\", "/")
end
-- Given a string delimited by whitespace, split into numbers
function splitLinesToNumbers(data)
    local result = {}

    for val in string.gmatch(data, "%S+") do
        result[#result + 1] = tonumber(val)
    end

    return result
end
function rotateCropForOrientation(crop, orientation)
    if orientation == "AB" then
        -- No adjustments needed: this is the orientation of the data
        return crop
    elseif orientation == "BC" then
        return {
            right = crop.bottom,
            bottom = 1 - crop.left,
            left = crop.top,
            top = 1 - crop.right,
            angle = crop.angle,
        }
    elseif orientation == "CD" then
        return {
            bottom = 1 - crop.top,
            left = 1 - crop.right,
            top = 1 - crop.bottom,
            right = 1 - crop.left,
            angle = crop.angle,
        }
    elseif orientation == "DA" then
        return {
            left = 1 - crop.bottom,
            top = crop.left,
            right = 1 - crop.top,
            bottom = crop.right,
            angle = crop.angle,
        }
    end
end
function processPhotos(photos)
    LrFunctionContext.callWithContext("export", function(exportContext)
        local progressScope = LrDialogs.showModalProgressDialog({
            title = "Auto negative crop",
            caption = "Analysing image with OpenCV",
            cannotCancel = false,
            functionContext = exportContext
        })

        local exportSession = LrExportSession({
            photosToExport = photos,
            exportSettings = {
                LR_collisionHandling = "rename",
                LR_export_bitDepth = "8",
                LR_export_colorSpace = "sRGB",
                LR_export_destinationPathPrefix = imgPreviewPath,
                LR_export_destinationType = "specificFolder",
                LR_export_useSubfolder = false,
                LR_format = "JPEG",
                LR_jpeg_quality = 1,
                LR_minimizeEmbeddedMetadata = true,
                LR_outputSharpeningOn = false,
                LR_reimportExportedPhoto = false,
                LR_renamingTokensOn = true,
                LR_size_doConstrain = true,
                LR_size_doNotEnlarge = true,
                LR_size_maxHeight = 1500,
                LR_size_maxWidth = 1500,
                LR_size_units = "pixels",
                LR_tokens = "{{image_name}}",
                LR_useWatermark = false,
            }
        })

        local numPhotos = exportSession:countRenditions()

        local renditionParams = {
            progressScope = progressScope,
            renderProgressPortion = 1,
            stopIfCanceled = true,
        }

        for i, rendition in exportSession:renditions(renditionParams) do
            -- Stop processing if the cancel button has been pressed
            if progressScope:isCanceled() then
                break
            end

            -- Common caption for progress bar
            local progressCaption = rendition.photo:getFormattedMetadata("fileName") .. " (" .. i .. "/" .. numPhotos .. ")"

            progressScope:setPortionComplete(i - 1, numPhotos)
            progressScope:setCaption("Processing " .. progressCaption)

            rendition:waitForRender()

            local photoPath = rendition.destinationPath
            local dataPath = photoPath .. ".txt"

            -- Build a command line to run a Python script on the exported image
            local cmd = pythonCommand:gsub("__ARGS__", '"' .. fixPath(scriptPath) .. '" "' .. fixPath(photoPath) .. '"')

            log:trace("Executing: " .. cmd)
            local exitCode = LrTasks.execute(cmd)

            if exitCode ~= 0 then
                LrDialogs.showError("The Python script exited with a non-zero status: " .. exitCode .. "\n\nCommand line was:\n" .. cmd)
                break
            end

            if LrFileUtils.exists(dataPath) == false then
                LrDialogs.showError("The Python script exited cleanly, but the output data file was not found:\n\n" .. dataPath)
                break
            end

            -- Read crop points from analysis output
            -- The directions/sides here are relative to the image that was processed
            local rawData = LrFileUtils.readFile(dataPath)
            local cropData = splitLinesToNumbers(rawData)

            rawCrop = {
                left = cropData[1],
                right = cropData[2],
                top = cropData[3],
                bottom = cropData[4],
                angle = cropData[5],
            }

            -- Re-orient cropping data to "AB" so the crop is applied as intended
            -- (Crop is always relative to the "AB" orientation in Lightroom)
            local developSettings = rendition.photo:getDevelopSettings()
            local crop = rotateCropForOrientation(rawCrop, developSettings["orientation"])

            LrTasks.startAsyncTask(function()
                catalog:withWriteAccessDo("Apply crop", function(context)
                    setCrop(
                        rendition.photo,
                        crop.angle,
                        crop.left,
                        crop.right,
                        crop.top,
                        crop.bottom
                    )
                end, {
                    timeout = 2
                })
            end)

            LrFileUtils.delete(photoPath)
            LrFileUtils.delete(dataPath)
        end
    end)
end
-- Collect photos to operate on
local targetPhotos = {}

if LrApplicationView.getCurrentModuleName() == "develop" then
    targetPhotos[1] = catalog.targetPhoto
elseif LrApplicationView.getCurrentModuleName() == "library" then
    targetPhotos = catalog.targetPhotos
end

-- Run autocrop
LrTasks.startAsyncTask(function()
    -- Reset all crops so the exports can be processed properly
    LrDevelopController.resetCrop()

    -- Process crops externally and apply
    processPhotos(targetPhotos)
end)

return {}
# detect.py
from __future__ import print_function

import cv2
import copy
import math
import numpy as np
import os
import sys

# Detect OpenCV 2.x vs 3.x
from pkg_resources import parse_version
IS_OPENCV_2 = parse_version(cv2.__version__) < parse_version('3.0.0')

# Alias BoxPoints as this lives in a different place in OpenCV 2 and 3
if IS_OPENCV_2:
    BoxPoints = cv2.cv.BoxPoints
else:
    BoxPoints = cv2.boxPoints

# Detection settings
MAX_COVERAGE = 0.98
INSET_PERCENT = 0.005
def thresholdImage(img, lowerThresh, ignoreMask):
    _, binary = cv2.threshold(img, lowerThresh, 255, cv2.THRESH_BINARY_INV) # THRESH_TOZERO_INV
    # binary = cv2.bitwise_not(binary)
    binary = cv2.bitwise_and(ignoreMask, binary)

    # Prevent tiny outlier collections of pixels spoiling the rect fitting
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.dilate(binary, kernel, iterations=3)
    binary = cv2.erode(binary, kernel, iterations=3)

    return binary
def findLargestContourRect(binary):
    largestRect = None
    largestArea = 0

    # Find external contours of all shapes
    if IS_OPENCV_2:
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    else:
        _, contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for cnt in contours:
        area = cv2.contourArea(cnt)

        # Keep track of the largest area seen
        if area > largestArea:
            largestArea = area
            largestRect = cv2.minAreaRect(cnt)

    return largestRect, largestArea
def findNonZeroPixelsRect(binary):
    edges = copy.copy(binary)

    if IS_OPENCV_2:
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    else:
        _, contours, hierarchy = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    nonZero = cv2.findNonZero(edges)

    if nonZero is None:
        return None, 0

    rect = cv2.minAreaRect(nonZero)
    area = rect[1][0] * rect[1][1]

    return rect, area
def normaliseRectRotation(rawRects):
    """
    Normalise rect orientation to have an angle between -45 and 45 degrees

    Rects generated by OpenCV can be "portrait" with a near -90 angle and flipped height/width.
    To combine and compare rects meaningfully, they all need to have the same orientation.
    """
    rects = []

    for rect in rawRects:
        center = rect[0]
        size = rect[1]
        angle = rect[2]

        if angle < -45:
            rect = (
                center,
                (size[1], size[0]),
                angle + 90
            )

        rects.append(rect)

    return rects

def medianRect(rects):
    if len(rects) == 0:
        return None

    rects = normaliseRectRotation(rects)

    # Sort rects by area
    rects.sort(key=lambda rect: rect[1][0] * rect[1][1])

    median = (
        (np.median([r[0][0] for r in rects]), np.median([r[0][1] for r in rects])),
        (np.median([r[1][0] for r in rects]), np.median([r[1][1] for r in rects])),
        np.median([r[2] for r in rects])
    )

    return median
def correctAspectRatio(rect, targetRatio=1.5, maxDifference=0.3):
    """
    Return an aspect-ratio corrected rect (and a success flag)

    Args:
        rect (OpenCV RotatedRect struct)
        targetRatio (float): Ratio represented as the larger image dimension divided by the smaller one
    """
    # Indexes into the rect nested tuple
    CENTER = 0; SIZE = 1; ANGLE = 2
    X = 0; Y = 1

    size = rect[SIZE]
    aspectRatio = max(size[X], size[Y]) / float(min(size[X], size[Y]))
    aspectError = targetRatio - aspectRatio

    # Factor out orientation to simplify the logic below
    # This assumes the larger dimension is X
    if size[X] == max(size[X], size[Y]):
        rectWidth = size[X]
        rectHeight = size[Y]
        widthDim = X
        heightDim = Y
    else:
        rectHeight = size[X]
        rectWidth = size[Y]
        widthDim = Y
        heightDim = X

    # Only attempt to correct aspect ratio where the ROI is roughly right already
    # This prevents odd results from poor outline detection
    if abs(aspectError) > maxDifference:
        return rect, False

    # Shrink width if the ratio was too wide
    if aspectRatio > targetRatio:
        print("ratio too large", aspectError)
        rectWidth = size[heightDim] * targetRatio

    # Shrink height if the ratio was too tall
    elif aspectRatio < targetRatio:
        print("ratio too small", aspectError)
        # rectWidth = size[heightDim] * targetRatio
        rectHeight = size[widthDim] / targetRatio

    # Apply new width/height in the original orientation
    if widthDim == X:
        newSize = (rectWidth, rectHeight)
    else:
        newSize = (rectHeight, rectWidth)

    newRect = (rect[CENTER], newSize, rect[ANGLE])

    return newRect, True
def findExposureBounds(img, showOutputWindow=False):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Smooth out noise
    # gray = cv2.GaussianBlur(gray,(5,5),0)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)

    # Maximise brightness range
    gray = cv2.equalizeHist(gray)

    # Create a mask to ignore the brightest spots
    # These are usually where there is no film covering the light source
    _, ignoreMask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)

    # Expand masked out area slightly to include adjacent edges
    kernel = np.ones((3, 3), np.uint8)
    ignoreMask = cv2.dilate(ignoreMask, kernel, iterations=3)

    # Create a mask to ignore areas of low saturation
    # When white balanced against the film stock, this is usually low saturation
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hsv = cv2.GaussianBlur(hsv, (5, 5), 0)
    satMask = cv2.inRange(hsv, (0, 0, 0), (255, 7, 255))

    # Combine saturation and brightness masks, then flip
    ignoreMask = cv2.bitwise_or(ignoreMask, satMask)
    ignoreMask = cv2.bitwise_not(ignoreMask)

    # Get min/max region of interest areas
    height, width, _ = img.shape
    maxArea = (height * MAX_COVERAGE) * (width * MAX_COVERAGE)
    minCaptureArea = maxArea * 0.65

    # algos = [findNonZeroPixelsRect]
    algos = [findLargestContourRect]
    results = []

    for func in algos:
        lowerThreshold = 0

        while lowerThreshold < 240:
            binary = thresholdImage(gray, lowerThreshold, ignoreMask)
            debugImg = cv2.cvtColor(binary, cv2.COLOR_GRAY2BGR)

            rect, area = func(binary)

            # Stop once a valid result is returned
            if area >= maxArea:
                break

            if area >= minCaptureArea:
                results.append(rect)
                lowerThreshold += 5

                # Draw in green for results that are collected
                debugLineColour = (0, 255, 0)
            else:
                lowerThreshold += 5

                # Draw in red for areas that were too small
                debugLineColour = (0, 0, 255)

            if showOutputWindow:
                if rect is not None:
                    # Get a rectangle around the contour
                    rectPoints = BoxPoints(rect)
                    rectPoints = np.int0(rectPoints)
                    cv2.drawContours(debugImg, [rectPoints], -1, debugLineColour, 3)

                # Draw threshold on debug output
                cv2.putText(
                    img=debugImg,
                    text='Threshold: ' + str(lowerThreshold),
                    org=(20, 30),
                    fontFace=cv2.FONT_HERSHEY_PLAIN,
                    fontScale=2,
                    color=(0, 150, 255),
                    lineType=4
                )

                cv2.imshow('image', cv2.resize(debugImg, (0, 0), fx=0.75, fy=0.75))
                cv2.waitKey(1)

    return medianRect(results)
if __name__ == '__main__':
    import argparse

    parser = argparse.ArgumentParser(description='Find crop for film negative scan')
    parser.add_argument('files', nargs='+', help='Image files to perform detection on (JPG, PNG, etc)')
    args = parser.parse_args()

    hasDisplay = os.getenv('DISPLAY') is not None

    for filename in args.files:
        if not os.path.exists(filename):
            print("ERROR:")
            print("Could not find file '%s'" % filename)
            sys.exit(5)

        # Read image
        img = cv2.imread(filename, cv2.IMREAD_UNCHANGED)

        # cv2.imshow('image', cv2.resize(img, (0,0), fx=0.75, fy=0.75) )
        # cv2.waitKey(0)

        rawRect = findExposureBounds(img, showOutputWindow=hasDisplay)

        # Outputs for Lightroom
        cropLeft = 0
        cropRight = 1.0
        cropTop = 0
        cropBottom = 1.0
        rotation = 0

        if rawRect is not None:
            # Average height and width of the detected area to get a constant inset
            insetPixels = ((rawRect[1][0] + rawRect[1][1]) / 2.0) * INSET_PERCENT
            insetRect = (
                rawRect[0], # Center
                (rawRect[1][0] - insetPixels, rawRect[1][1] - insetPixels), # Size
                rawRect[2] # Rotation
            )

            rect, aspectChanged = correctAspectRatio(insetRect)

            boxWidth = rect[1][0]
            boxHeight = rect[1][1]
            box = np.int0(BoxPoints(rect))

            # # Create a mask that excludes areas that are probably the directly visible light source
            # _, wbMask = cv2.threshold(gray, 253, 0, cv2.THRESH_TOZERO)
            # wbMask = cv2.bitwise_not(wbMask)
            #
            # # Mask out the detected frame - we only want to look at the base film layer
            # cv2.fillConvexPoly(wbMask, box, 0)
            # # cv2.imshow('image', wbMask )
            # # cv2.waitKey(0)
            # # bgr = cv2.mean(img, wbMask)
            # lab = cv2.mean(cv2.cvtColor(img, cv2.COLOR_BGR2LAB), wbMask)
            # # print [i for i in reversed(bgr)]
            # tint = lab[1] - 127
            # temperature = lab[2] - 127
            # print (lab[0]/255.0)*100, temperature, tint

            # Lightroom doesn't support rotation more than 45 degrees
            # The detected rect usually includes a 90 degree rotation for landscape images
            rotation = -rect[2]
            if rotation > 45:
                rotation -= 90
            elif rotation < -45:
                rotation += 90

            # Calculate crops in a format for Lightroom (0.0 to 1.0 for each edge)
            centerX = rect[0][0]
            centerY = rect[0][1]

            # Use the average distance from each side as the crop in Lightroom
            imgHeight, imgWidth, _ = img.shape

            top = []; left = []; right = []; bottom = []

            for point in box:
                # point = rotateAroundPoint(point, math.radians(rotation))
                if point[0] > centerX:
                    right.append(point[0])
                else:
                    left.append(point[0])

                if point[1] > centerY:
                    bottom.append(point[1])
                else:
                    top.append(point[1])

            cropRight = min(right) / float(imgWidth)
            cropLeft = max(left) / float(imgWidth)
            cropBottom = min(bottom) / float(imgHeight)
            cropTop = max(top) / float(imgHeight)

            # Draw original detected area
            rawBox = np.int0(BoxPoints(rawRect))
            cv2.drawContours(img, [rawBox], -1, (255, 0, 0), 1)

            # Draw inset area
            insetBox = np.int0(BoxPoints(insetRect))
            cv2.drawContours(img, [insetBox], -1, (0, 255, 255), 1)

            # Draw adjusted aspect ratio area
            cv2.drawContours(img, [box], -1, (0, 255, 0), 2)
            cv2.circle(img, (int(rect[0][0]), int(rect[0][1])), 3, (0, 255, 0), 3)

        # Write result to disk for the Lightroom plugin to pick up
        # (The Lightroom API doesn't appear to allow streaming in output from a program)
        cropData = [
            cropLeft,
            cropRight,
            cropTop,
            cropBottom,
            rotation
        ]

        for v in cropData:
            print(v)

        with open(filename + ".txt", 'w') as out:
            out.write("\r\n".join(str(x) for x in cropData))

        cv2.imwrite(filename + "-analysis.jpg", img)

        # if hasDisplay:
        #     cv2.imshow('image', cv2.resize(img, (0,0), fx=0.75, fy=0.75) )
        #     cv2.waitKey(0)
-- Info.lua
return {
    LrSdkVersion = 6.0,
    LrSdkMinimumVersion = 6.0,

    LrToolkitIdentifier = 'nz.co.stecman.negativeautocrop',
    LrPluginName = "Negative Auto Crop",

    LrExportMenuItems = {
        {
            title = "Auto &Crop Negative",
            file = "AutoCrop.lua",
            enabledWhen = "photosSelected"
        }
    },

    VERSION = {
        major = 1,
        minor = 0,
        revision = 0,
    }
}
@Herbert1630 commented Dec 7, 2019

Hi! Thank you, this is very useful. I need to crop a massive number of pictures (around 29,000).
I have no knowledge about computers and coding, but I managed to install the plugin. Now in LR I get this error: 32512
What should I do? What's the likely cause?
Thank you!

@stecman (author) commented Dec 8, 2019

@Herbert1630 unfortunately the Lightroom error codes aren't very meaningful, but I assume the call to run the Python script is failing for some reason. There should be an AutoCrop.log file in your Documents folder that lists the command it executed:

Executing: bash -c 'DISPLAY=:0 python "/mnt/c/Users/Stecman/Documents/Lightroom/NegativeAutoCrop.lrplugin/detect.py" "/mnt/c/Users/Stecman/Documents/Lightroom/NegativeAutoCrop.lrplugin/render/_MG_2398.jpg"' 

If you copy everything after Executing: and run it in PowerShell on Windows or Terminal on Mac OSX, you'll get a proper error message that says what the problem is.

Do note that while this setup works, it really is just a proof of concept at this stage and needs further work to apply reliably - especially at the scale you're looking at. If you want to reach out to the email on my Github profile, I'd be happy to work with you to get this to a more generally usable state.

@mngyuan commented Jun 8, 2020

Any chance of making this work for LR CC? I've got the script running but it just toggles the crop tool on and closes it without doing anything.

@stecman (author) commented Jun 9, 2020

Have you tested that the Python script gives the expected results outside of Lightroom, @mngyuan?

I haven't tested this with LR CC, but it sounds like the Lua side is working at least if it's getting to the cropping stage. Since you're not seeing any error dialogs, I suspect the plugin may be working but the crop returned from the Python script is 0, 1, 0, 1 (i.e. no crop).

@mngyuan commented Jun 11, 2020

@stecman you're correct, I'm getting the following output

~  ➜  /usr/local/bin/python "/Users/phorust/Downloads/AutoCrop.lrplugin/detect.py" "/Users/phorust/Downloads/AutoCrop.lrplugin/render/HOMED_00107.jpg-analysis.jpg"
0
1.0
0
1.0
0

I thought potentially it was because of the nature of the scan (my scans have the negative holder in frame), or because I had already converted it into a positive through Negative Lab Pro, but neither of those things seems to have an effect.

I may dive in deeper later if I have time ..., so any pointers on where to start are appreciated!

@stecman (author) commented Jun 11, 2020

@mngyuan this example is tuned pretty specifically for my negative captures with emulsion to the edges of the frame, so it will likely need some changes for other setups. Currently I don't think it will work for positives as it tries to find a dark area (exposure) inside a lighter one (unexposed part of the film).

A few pointers:

  • You should see a preview window showing the detection as it's running. You may have to force hasDisplay to True on OSX if you're not seeing this. With the preview it should be fairly evident where it's going wrong.

    cv2.imshow calls like the one commented out at the end are useful for inspecting an image at various points.

  • There are two detection methods in the Python script (see the algos variable). One of them may work better for your images:

    • findLargestContourRect: Finds the largest contiguous blob of pixels in a thresholded image and returns its bounds.
    • findNonZeroPixelsRect: Returns bounds that contain all white pixels in a thresholded image.
  • Don't worry about the LR side until the Python script is doing what you want - it's just some glue to push cropping data into Lightroom, and it makes debugging more difficult.

@JackyChiu commented Sep 14, 2020

Thanks for making this! It works pretty well with scans done without a film strip holder since you get more borders.

I'm on macOS and was getting the error The Python script exited with a non-zero status: 256 .... As the comment in the Lua script says, I had to change it to the absolute path of my Python binary (python3 for me). Just in case anyone else runs into this issue.
