@Svidro Svidro/!Complex scripts
Last active Mar 21, 2019

A set of scripts created to help workflows with (primarily) immunofluorescence (IF) images
TOC
Affine transform objects between images.groovy - used with the Align images experimental tool in v0.2.0m1
Background staining check - Takes annotations, expands an area around them, checks the staining level in that area,
then deletes all expanded areas, plus any original areas whose surrounding staining exceeds the background threshold. Can help check for staining artifacts.
Classifier with GUI.groovy - User interface based macro to simplify classifying many possible channels. Also generates all
possible combinations of base classes (double, triple, etc positives).
Classifier with no GUI.groovy - Same as above but streamlined for use in a script, with no user interaction.
*Added Detection based versions of both above scripts which should work for tiles.
Invasion assay or tumor adjacent area.groovy - Creating areas of increasing distance from the tumor annotation border. Use negative
values in the annotation expansion for invasion assays.
Lipid detection and measurement.groovy - Detects lighter areas within your tissue area and creates detection objects with measurements.
Multiple cell detections.groovy - Set of scripts that allow the user to run one cell detection, store those results, run a second cell
detection, and then import the results of the first. Useful when one set of Cell detection variables does not accurately detect all
of your cells.
Positive Pixel scripting for QP 1.2.groovy - Demonstrates ways to successfully use positive pixel detection to handle difficult staining.
Positive Pixel scripting for QP 1.3.groovy - Same as above but modified for alterations to positive pixel detection in 1.3
Step 1 through Step 4 - Part of a workflow for semiautomated generation of very high resolution cells. User defines the cytoplasm.
https://groups.google.com/forum/#!msg/qupath-users/ehxID096NV8/U7n5_CNABwAJ
Tissue detection (for workflow) - Two scripts that mimic Simple Tissue Detection from QuPath, but give more channel flexibility when
working with fluorescent images. Normally QuPath will only use the first channel for tissue detection, while this will let you choose
the balance between channels that works best for you. Workflow version removes the GUI.
https://groups.google.com/forum/#!topic/qupath-users/4g26bLOC_CE
Tumor Region Measurements - Script is from Pete and can be used for measurements in and around a tumor.
https://petebankhead.github.io/qupath/scripts/2018/08/08/three-regions.html
Updated Jan 2019 with Classifier scripts to make it easier to... well, classify. https://groups.google.com/forum/#!topic/qupath-users/LMxYihQMvTw
Updated Dec 2018 with a Tissue Detection script that can act in a similar (rough) fashion to simple tissue detection, but has the
advantage of allowing the user to choose and weight channels. This makes it possible to look at specific areas within tissue
samples, even in 7-8 color images. See here for examples and an explanation: https://groups.google.com/forum/#!topic/qupath-users/4g26bLOC_CE
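The tissue detection scripts described above are not reproduced in this gist, but the core idea they implement - combining several fluorescence channels with user-chosen weights before thresholding, instead of relying on channel 1 alone - can be sketched in a few lines. This is a minimal illustration only; the weights, threshold, and method name here are made up for the example and are not taken from the actual scripts.

```java
// Sketch of the channel-weighting idea behind the tissue detection scripts:
// combine per-pixel channel values with user-chosen weights, then threshold.
// All values and names here are illustrative, not from the gist.
public class WeightedTissueMask {

    // channels[c][p] = intensity of channel c at pixel p
    static boolean[] tissueMask(double[][] channels, double[] weights, double threshold) {
        int n = channels[0].length;
        boolean[] mask = new boolean[n];
        for (int p = 0; p < n; p++) {
            double sum = 0;
            for (int c = 0; c < channels.length; c++) {
                sum += weights[c] * channels[c][p];   // weighted channel combination
            }
            mask[p] = sum > threshold;                // pixel counts as tissue if above threshold
        }
        return mask;
    }

    public static void main(String[] args) {
        // Two channels, two pixels: channel 1 weighted fully, channel 2 at half weight
        double[][] channels = {{0, 100}, {10, 0}};
        boolean[] mask = tissueMask(channels, new double[]{1.0, 0.5}, 20);
        System.out.println(mask[0] + " " + mask[1]);  // false true
    }
}
```

Because the weights are per-channel, setting a weight to zero simply ignores that channel, which is how specific structures can be isolated even in 7-8 color images.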
/** QUPATH 0.2.0m1
* Script to transfer QuPath objects from one image to another, applying an AffineTransform to any ROIs.
* https://forum.image.sc/t/interactive-image-alignment/23745/8
*/
// SET ME! Define transformation matrix
// Get this from 'Interactive image alignment (experimental)'
def matrix = [
-0.998, -0.070, 127256.994,
0.070, -0.998, 72627.371
]
// SET ME! Define image containing the original objects (must be in the current project)
def otherImageName = null
// SET ME! Delete existing objects
def deleteExisting = true
// SET ME! Change this if things end up in the wrong place
def createInverse = true
import qupath.lib.gui.helpers.DisplayHelpers
import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathDetectionObject
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.objects.PathTileObject
import qupath.lib.roi.PathROIToolsAwt
import qupath.lib.roi.interfaces.ROI
import java.awt.geom.AffineTransform
import static qupath.lib.gui.scripting.QPEx.*
if (otherImageName == null) {
DisplayHelpers.showErrorNotification("Transform objects", "Please specify an image name in the script!")
return
}
// Get the project & the requested image name
def project = getProject()
def entry = project.getImageList().find {it.getImageName() == otherImageName}
if (entry == null) {
print 'Could not find image with name ' + otherImageName
return
}
def otherHierarchy = entry.readHierarchy()
def pathObjects = otherHierarchy.getRootObject().getChildObjects()
// Define the transformation matrix
def transform = new AffineTransform(
matrix[0], matrix[3], matrix[1],
matrix[4], matrix[2], matrix[5]
)
if (createInverse)
transform = transform.createInverse()
if (deleteExisting)
clearAllObjects()
def newObjects = []
for (pathObject in pathObjects) {
newObjects << transformObject(pathObject, transform)
}
addObjects(newObjects)
print 'Done!'
/**
* Transform object, recursively transforming all child objects
*
* @param pathObject
* @param transform
* @return
*/
PathObject transformObject(PathObject pathObject, AffineTransform transform) {
// Create a new object with the converted ROI
def roi = pathObject.getROI()
def roi2 = transformROI(roi, transform)
def newObject = null
if (pathObject instanceof PathCellObject) {
def nucleusROI = pathObject.getNucleusROI()
if (nucleusROI == null)
newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
else
newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList())
} else if (pathObject instanceof PathTileObject) {
newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
} else if (pathObject instanceof PathDetectionObject) {
newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
} else {
newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
}
// Handle child objects
if (pathObject.hasChildren()) {
newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)}))
}
return newObject
}
/**
* Transform ROI (via conversion to Java AWT shape)
*
* @param roi
* @param transform
* @return
*/
ROI transformROI(ROI roi, AffineTransform transform) {
def shape = PathROIToolsAwt.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses!
def shape2 = transform.createTransformedShape(shape)
return PathROIToolsAwt.getShapeROI(shape2, roi.getC(), roi.getZ(), roi.getT(), 0.5)
}
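The matrix reordering in the script above (`matrix[0], matrix[3], matrix[1], ...`) exists because the alignment tool reports the six values row by row, while `java.awt.geom.AffineTransform`'s constructor takes them in the order (m00, m10, m01, m11, m02, m12). A standalone sketch, using the same matrix values as the script, shows the mapping is correct: the image origin lands exactly on the translation column.

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

// Demonstrates why the script reorders the six row-major values reported by
// the alignment tool: AffineTransform's constructor expects
// (m00, m10, m01, m11, m02, m12), i.e. columns, not rows.
public class TransformOrderDemo {

    // Build the transform exactly as the QuPath script does
    static AffineTransform fromRowMajor(double[] m) {
        return new AffineTransform(m[0], m[3], m[1], m[4], m[2], m[5]);
    }

    public static void main(String[] args) {
        double[] matrix = {-0.998, -0.070, 127256.994,
                            0.070, -0.998,  72627.371};
        AffineTransform t = fromRowMajor(matrix);
        // The origin maps straight onto the translation column (m02, m12)
        Point2D p = t.transform(new Point2D.Double(0, 0), null);
        System.out.println(p.getX() + ", " + p.getY());  // 127256.994, 72627.371
    }
}
```

If objects end up mirrored or far off-image, the `createInverse` flag in the script flips the direction of the mapping rather than requiring a new matrix.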
//Ideally checks one channel for presence above a certain background level, to help remove islets or areas of interest in areas of bad staining
//Could be modified to check for stain bubbles, edge staining artifacts, multiple channels, etc.
//RESETS ANY CLASSIFICATIONS ALREADY SET. Would require substantial revision to avoid reclassifying annotations.
//Expansion distance in microns around the annotations that is checked for background. Note that this value is a String.
def expansion = "20.0"
def threshold = 5000
//channel variable is part of a String and needs to be exactly correct
def channel = "Channel 2"
import qupath.lib.roi.*
import qupath.lib.objects.*
def pixelSize = getCurrentImageData().getServer().getPixelHeightMicrons()
hierarchy = getCurrentHierarchy()
originals = getAnnotationObjects()
classToSubtract = "Original"
surroundingClass = "Surrounding"
areaClass = "Donut"
//set the class on all of the base objects, lots of objects will be created and this helps keep track.
originals.each{it.setPathClass(getPathClass(surroundingClass))}
selectAnnotations()
runPlugin('qupath.lib.plugins.objects.DilateAnnotationPlugin', '{"radiusMicrons": '+expansion+', "removeInterior": false, "constrainToParent": true}');
originals.each{it.setPathClass(getPathClass(classToSubtract))}
surroundings = getAnnotationObjects().findAll{it.getPathClass() == getPathClass(surroundingClass)}
fireHierarchyUpdate()
for (parent in surroundings){
//child object should be of the original annotations, now with classToSubtract
child = parent.getChildObjects()
updated = PathROIToolsAwt.combineROIs(parent.getROI(), child[0].getROI(), PathROIToolsAwt.CombineOp.SUBTRACT)
// Remove original annotation, add new ones
annotations = new PathAnnotationObject(updated, getPathClass(areaClass))
addObject(annotations)
selectObjects{it.getPathClass() == getPathClass(areaClass)}
///////////MAY NEED TO MANUALLY EDIT THIS LINE and "value" below A BIT BASED ON IMAGE///////////////////
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": '+pixelSize+', "region": "ROI", "tileSizeMicrons": 25.0, "channel1": false, "channel2": true, "channel3": false, "channel4": false, "doMean": true, "doStdDev": true, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickMin": 0, "haralickMax": 0, "haralickDistance": 1, "haralickBins": 32}');
donut = getAnnotationObjects().findAll{it.getPathClass()==getPathClass(areaClass)}
fireHierarchyUpdate()
value = donut[0].getMeasurementList().getMeasurementValue("ROI: 0.32 " + qupath.lib.common.GeneralTools.micrometerSymbol() + " per pixel: "+channel+": Mean")
//occasionally the value is NaN for no reason I can figure out. I decided it was safer to keep the results any time
//this happens for now, though if the preserved regions end up being problematic the && !value.isNaN should be removed.
if ( value > threshold && !value.isNaN()){
println("remove, value was "+value)
removeObject(parent, false)
removeObject(donut[0], true)
} else {println("keep");
removeObject(parent, true);
removeObject(donut[0],true)
}
}
fireHierarchyUpdate()
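The comment in the script above worries about occasional NaN mean values. In Java (and therefore Groovy), every ordered comparison involving NaN evaluates to false, so a NaN value can never pass `value > threshold` on its own; the explicit `!value.isNaN()` guard only documents that behavior. A small standalone check of the script's removal condition:

```java
// In Java/Groovy any ordered comparison with NaN is false, so a NaN
// background mean falls through to the "keep" branch even without the
// explicit isNaN() guard used in the script above.
public class NaNComparisonDemo {

    // Mirrors the script's removal condition for a measured background value
    static boolean shouldRemove(double value, double threshold) {
        return value > threshold && !Double.isNaN(value);
    }

    public static void main(String[] args) {
        System.out.println(shouldRemove(Double.NaN, 5000));  // false: NaN never exceeds
        System.out.println(shouldRemove(6000, 5000));        // true: background too bright
        System.out.println(shouldRemove(4000, 5000));        // false: below threshold
    }
}
```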
//V1 Edited slightly to work for tiles/SLICs
import javafx.application.Platform
import javafx.beans.property.SimpleLongProperty
import javafx.geometry.Insets
import javafx.scene.Scene
import javafx.geometry.Pos
import javafx.scene.control.Button
import javafx.scene.control.Label
import javafx.scene.control.TableView
import javafx.scene.control.TextField
import javafx.scene.control.CheckBox
import javafx.scene.control.ComboBox
import javafx.scene.control.TableColumn
import javafx.scene.control.ColorPicker
import javafx.scene.layout.BorderPane
import javafx.scene.layout.GridPane
import javafx.scene.control.Tooltip
import javafx.stage.Stage
import qupath.lib.gui.QuPathGUI
import qupath.lib.gui.helpers.ColorToolsFX;
import javafx.scene.paint.Color;
//Settings to control the dialog boxes for the GUI
int col = 0
int row = 0
int textFieldWidth = 120
int labelWidth = 150
def gridPane = new GridPane()
gridPane.setPadding(new Insets(10, 10, 10, 10));
gridPane.setVgap(2);
gridPane.setHgap(10);
def server = getCurrentImageData().getServer()
//Upper thresholds will default to the max bit depth, since that is likely the most common upper limit for a given image.
maxPixel = Math.pow((double) 2,(double)server.getBitsPerPixel())-1
positive = []
//print(maxPixel)
def titleLabel = new Label("Intended for use where one marker determines a base class.\nFor example, you could use Channel 1 Cytoplasmic Mean and Channel 2 Nuclear Mean\nto generate two base classes and a Double positive class where each condition is true.\n\n")
gridPane.add(titleLabel,col, row++, 3, 1)
def requestLabel = new Label("How many base classes/single measurements are you interested in?\nThe above example would have two.\n")
gridPane.add(requestLabel,col, row++, 3, 1)
def TextField classText = new TextField("2");
classText.setMaxWidth( textFieldWidth);
classText.setAlignment(Pos.CENTER_RIGHT)
gridPane.add(classText, col++, row, 1, 1)
//ArrayList<Label> channelLabels
Button startButton = new Button()
startButton.setText("Start Classifying")
gridPane.add(startButton, col, row++, 1, 1)
startButton.setTooltip(new Tooltip("If you need to change the number of classes, re-run the script"));
col = 0
row+=10 //spacer
def loadLabel = new Label("Load a classifier:")
gridPane.add(loadLabel,col++, row, 2, 1)
def TextField classFile = new TextField("MyClassifier");
classFile.setMaxWidth( textFieldWidth);
classFile.setAlignment(Pos.CENTER_RIGHT)
gridPane.add( classFile, col++, row, 1, 1)
Button loadButton = new Button()
loadButton.setText("Load Classifier")
gridPane.add(loadButton, col++, row++, 1, 1)
//incredibly lazy and sloppy coding, just a copy and paste taking slightly different inputs
loadButton.setOnAction{
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",classFile.getText())
new File(path).withObjectInputStream {
cObj = it.readObject()
}
//Create an arraylist with the same number of entries as classes
CHANNELS = cObj.size()
col = 0
row = 0
def secondGridPane = new GridPane()
secondGridPane.setPadding(new Insets(10, 10, 10, 10));
secondGridPane.setVgap(2);
secondGridPane.setHgap(10);
def assist = new Label("Short Class Names are recommended as dual positives and beyond use the full names of all positive classes.\n ")
secondGridPane.add(assist, col, row++, 5, 1)
def mChoice = new Label("Measurement ")
mChoice.setMaxWidth(400)
mChoice.setAlignment(Pos.CENTER_RIGHT)
def mLowThresh = new Label("Lower Threshold <= ")
def mHighThresh = new Label("<= Upper Threshold ")
def mClassName = new Label("Class Name ")
secondGridPane.add( mChoice, col++, row, 1,1)
secondGridPane.add( mLowThresh, col++, row, 1,1)
secondGridPane.add( mClassName, col++, row, 1,1)
secondGridPane.add( mHighThresh, col, row++, 1,1)
//create data structures to use for building the classifier
boxes = new ComboBox [CHANNELS]
lowerTs = new TextField [CHANNELS]
classList = new TextField [CHANNELS]
upperTs = new TextField [CHANNELS]
colorPickers = new ColorPicker [CHANNELS]
//create the dialog where the user will select the measurements of interest and values
for (def i=0; i<CHANNELS;i++) {
col =0
//Add to dialog box, new row for each
boxes[i] = new ComboBox()
qupath.lib.classifiers.PathClassificationLabellingHelper.getAvailableFeatures(getDetectionObjects()).each {boxes[i].getItems().add(it) }
boxes[i].setValue(cObj[i][0])
classList[i] = new TextField(cObj[i][2])
lowerTs[i] = new TextField(cObj[i][1])
upperTs[i] = new TextField(cObj[i][3])
classList[i].setMaxWidth( textFieldWidth);
classList[i].setAlignment(Pos.CENTER_RIGHT)
lowerTs[i].setMaxWidth( textFieldWidth);
lowerTs[i].setAlignment(Pos.CENTER_RIGHT)
upperTs[i].setMaxWidth( textFieldWidth);
upperTs[i].setAlignment(Pos.CENTER_RIGHT)
colorPickers[i] = new ColorPicker(Color.web(cObj[i][4]))
secondGridPane.add(boxes[i], col++, row, 1,1)
secondGridPane.add(lowerTs[i], col++, row, 1,1)
secondGridPane.add(classList[i], col++, row, 1, 1)
secondGridPane.add(upperTs[i], col++, row, 1,1)
secondGridPane.add(colorPickers[i], col++, row++, 1,1)
}
Button runButton = new Button()
runButton.setText("Run Classifier")
secondGridPane.add(runButton, 0, row++, 1, 1)
//All stuff for actually classifying cells
runButton.setOnAction {
//set up for classifier
def cells = getDetectionObjects()
cells.each {it.setPathClass(getPathClass('Negative'))}
startTime = System.currentTimeMillis()
//start classifier with all cells negative
for (def i=0; i<CHANNELS; i++){
def lower = Float.parseFloat(lowerTs[i].getText())
def upper = Float.parseFloat(upperTs[i].getText())
//create lists for each measurement, classify cells based off of those measurements
positive[i] = cells.findAll {measurement(it, boxes[i].getValue()) >= lower && measurement(it, boxes[i].getValue()) <= upper}
positive[i].each {it.setPathClass(getPathClass(classList[i].getText()+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)}
c = colorPickers[i].getValue()
currentPathClass = getPathClass(classList[i].getText()+' positive')
//for some reason setColor needs to be used here instead of setColorRGB which applies to objects and not classes?
currentPathClass.setColor(ColorToolsFX.getRGBA(c))
}
//Call the classifier on each list of positive single class cells, except for the last one!
for (def i=0; i<(CHANNELS-1); i++){
println("ROUND "+i)
int remaining = 0
for (def j = i+1; j<CHANNELS; j++){
remaining +=1
}
//println("SENDING CELLS TO CLASSIFIER "+positive[i].size())
depth = 2
classifier(classList[i].getText(), positive[i], remaining, i)
}
println("classifier done")
fireHierarchyUpdate()
}
//end Run Button
row+=10 //spacer
Button saveButton = new Button()
saveButton.setText("Save Classifier")
secondGridPane.add(saveButton, 1, row, 1, 1)
def TextField saveFile = new TextField("MyClassifier");
saveFile.setMaxWidth( textFieldWidth);
saveFile.setAlignment(Pos.CENTER_RIGHT)
secondGridPane.add( saveFile, 2, row++, 1, 1)
//All stuff for actually classifying cells
saveButton.setOnAction {
def export = []
for (def l=0; l<CHANNELS;l++){
export << [boxes[l].getValue(), lowerTs[l].getText(), classList[l].getText(), upperTs[l].getText(), colorPickers[l].getValue().toString()]
}
mkdirs(buildFilePath(PROJECT_BASE_DIR, "classifiers"))
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",saveFile.getText())
new File(path).withObjectOutputStream {
it.writeObject(export)
}
}
//End of classifier window
Platform.runLater {
def stage3 = new Stage()
stage3.initOwner(QuPathGUI.getInstance().getStage())
stage3.setScene(new Scene( secondGridPane))
stage3.setTitle("Loaded Classifier "+classFile.getText())
stage3.setWidth(870);
stage3.setHeight(900);
//stage.setResizable(false);
stage3.show()
}
}
//end of the loaded classifier
startButton.setOnAction {
col = 0
row = 0
//Create an arraylist with the same number of entries as classes
CHANNELS = Integer.parseInt(classText.getText())
//channelLabels = new ArrayList( CHANNELS)
def secondGridPane = new GridPane()
secondGridPane.setPadding(new Insets(10, 10, 10, 10));
secondGridPane.setVgap(2);
secondGridPane.setHgap(10);
def assist = new Label("Short Class Names are recommended as dual positives and beyond use the full names of all positive classes.\n ")
secondGridPane.add(assist, col, row++, 5, 1)
def mChoice = new Label("Measurement ")
mChoice.setMaxWidth(400)
mChoice.setAlignment(Pos.CENTER_RIGHT)
def mLowThresh = new Label("Lower Threshold <= ")
def mHighThresh = new Label("<= Upper Threshold ")
def mClassName = new Label("Class Name ")
secondGridPane.add( mChoice, col++, row, 1,1)
secondGridPane.add( mLowThresh, col++, row, 1,1)
secondGridPane.add( mClassName, col++, row, 1,1)
secondGridPane.add( mHighThresh, col, row++, 1,1)
//create data structures to use for building the classifier
boxes = new ComboBox [CHANNELS]
lowerTs = new TextField [CHANNELS]
classList = new TextField [CHANNELS]
upperTs = new TextField [CHANNELS]
colorPickers = new ColorPicker [CHANNELS]
//create the dialog where the user will select the measurements of interest and values
for (def i=0; i<CHANNELS;i++) {
col =0
//Add to dialog box, new row for each
boxes[i] = new ComboBox()
qupath.lib.classifiers.PathClassificationLabellingHelper.getAvailableFeatures(getDetectionObjects()).each {boxes[i].getItems().add(it) }
classList[i] = new TextField("C" + (i+1))
lowerTs[i] = new TextField("0")
upperTs[i] = new TextField(maxPixel.toString())
classList[i].setMaxWidth( textFieldWidth);
classList[i].setAlignment(Pos.CENTER_RIGHT)
lowerTs[i].setMaxWidth( textFieldWidth);
lowerTs[i].setAlignment(Pos.CENTER_RIGHT)
upperTs[i].setMaxWidth( textFieldWidth);
upperTs[i].setAlignment(Pos.CENTER_RIGHT)
colorPickers[i] = new ColorPicker()
secondGridPane.add(boxes[i], col++, row, 1,1)
secondGridPane.add(lowerTs[i], col++, row, 1,1)
secondGridPane.add(classList[i], col++, row, 1, 1)
secondGridPane.add(upperTs[i], col++, row, 1,1)
secondGridPane.add(colorPickers[i], col++, row++, 1,1)
}
Button runButton = new Button()
runButton.setText("Run Classifier")
secondGridPane.add(runButton, 0, row++, 1, 1)
//All stuff for actually classifying cells
runButton.setOnAction {
//set up for classifier
def cells = getDetectionObjects()
cells.each {it.setPathClass(getPathClass('Negative'))}
//start classifier with all cells negative
for (def i=0; i<CHANNELS; i++){
def lower = Float.parseFloat(lowerTs[i].getText())
def upper = Float.parseFloat(upperTs[i].getText())
//create lists for each measurement, classify cells based off of those measurements
positive[i] = cells.findAll {measurement(it, boxes[i].getValue()) >= lower && measurement(it, boxes[i].getValue()) <= upper}
positive[i].each {it.setPathClass(getPathClass(classList[i].getText()+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)}
c = colorPickers[i].getValue()
currentPathClass = getPathClass(classList[i].getText()+' positive')
//for some reason setColor needs to be used here instead of setColorRGB which applies to objects and not classes?
currentPathClass.setColor(ColorToolsFX.getRGBA(c))
}
//Call the classifier on each list of positive single class cells, except for the last one!
for (def i=0; i<(CHANNELS-1); i++){
println("ROUND "+i)
int remaining = 0
for (def j = i+1; j<CHANNELS; j++){
remaining +=1
}
//println("SENDING CELLS TO CLASSIFIER "+positive[i].size())
depth = 2
classifier(classList[i].getText(), positive[i], remaining, i)
}
println("classifier done")
fireHierarchyUpdate()
}
//end Run Button
//////////////////////////
row+=10 //spacer
Button saveButton = new Button()
saveButton.setText("Save Classifier")
secondGridPane.add(saveButton, 1, row, 1, 1)
def TextField saveFile = new TextField("MyClassifier");
saveFile.setMaxWidth( textFieldWidth);
saveFile.setAlignment(Pos.CENTER_RIGHT)
secondGridPane.add( saveFile, 2, row++, 1, 1)
//All stuff for actually classifying cells
saveButton.setOnAction {
def export = []
for (def l=0; l<CHANNELS;l++){
export << [boxes[l].getValue(), lowerTs[l].getText(), classList[l].getText(), upperTs[l].getText(), colorPickers[l].getValue().toString()]
}
mkdirs(buildFilePath(PROJECT_BASE_DIR, "classifiers"))
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",saveFile.getText())
new File(path).withObjectOutputStream {
it.writeObject(export)
}
}
//////////////////////
//End of classifier window
Platform.runLater {
def stage2 = new Stage()
stage2.initOwner(QuPathGUI.getInstance().getStage())
stage2.setScene(new Scene( secondGridPane))
stage2.setTitle("Build Classifier ")
stage2.setWidth(870);
stage2.setHeight(900);
//stage.setResizable(false);
stage2.show()
}
}
//Some stuff that controls the dialog box showing up. I don't really understand it but it is needed.
Platform.runLater {
def stage = new Stage()
stage.initOwner(QuPathGUI.getInstance().getStage())
stage.setScene(new Scene( gridPane))
stage.setTitle("Simple Classifier for Multiple Classes ")
stage.setWidth(550);
stage.setHeight(300);
//stage.setResizable(false);
stage.show()
}
//Recursive function to keep track of what needs to be classified next.
//listAName is the current classifier name (for example Class 1 during the first pass) which gets modified with the intersect
//and would result in cells from this pass being called Class 1,Class2 positive.
//listA is the current list of cells being checked for intersection with the first member of...
//remainingListSize is the number of lists in "positive[]" that the current list needs to be checked against
//position keeps track of the starting position of listAName class. So on the first runthrough everything will start with C1
//The next runthrough will start with position 2 since the base class will be C2
void classifier (listAName, listA, remainingListSize, position = 0){
//println("listofLists " +remainingListSize)
//println("base list size"+listA.size())
for (def y=0; y <remainingListSize; y++){
//println("listofLists in loop" +remainingListSize)
//println("y "+y)
//println("depth"+depth)
k = (position+y+1).intValue()
//println("k "+k)
// get the measurements needed to determine if a cell is a member of the next class (from listOfLists)
def lower = Float.parseFloat(lowerTs[k].getText())
def upper = Float.parseFloat(upperTs[k].getText())
//intersect the listA with the first of the listOfLists
//on the first run, this would take all of Class 1, and compare it with measurements that determine Class 2, resulting in a subset of
//Class 1 that meet both criteria
def passList = listA.findAll {measurement(it, boxes[k].getValue()) >= lower && measurement(it, boxes[k].getValue()) <= upper}
newName = classList[k].getText()
//Create a new name based off of the current name and the newly compared class
// on the first runthrough this would give "Class 1,Class 2 positive"
def mergeName = listAName+","+newName
//println("depth "+depth)
//println(mergeName+" with number of remaining lists "+remainingListSize)
passList.each{
//Check if the class being applied is "shorter" than the current class.
//This prevents something like "C2,C3" from overwriting "C1,C2,C3,C4" from the first call.
if (it.getMeasurementList().getMeasurementValue("ClassDepth") < depth) {
it.setPathClass(getPathClass(mergeName+' positive'));
it.getMeasurementList().putMeasurement("ClassDepth", depth)
}
}
if (k == (positive.size()-1)){
//If we are comparing the current list to the last positive class list, we are done
//Go up one level of classifier depth and return
depth -=1
return;
} else{
//Otherwise, move one place further along the "positive" list of base classes, and increase depth
//This happens when going from C1,C2 to C1,C2,C3 etc.
def passAlong = remainingListSize-1
//println("passAlong "+passAlong.size())
//println("name for next " +mergeName)
depth +=1
classifier(mergeName, passList, passAlong, k)
}
//println("loopy depth"+depth)
}
}
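Stripped of the cells, thresholds, and depth bookkeeping, the recursion in classifier() is essentially a combination generator over the ordered list of base classes: starting from each base class it appends every later base class, recursing one level deeper each time. A minimal standalone sketch of just that name-building recursion (class names C1..Cn are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Stripped-down sketch of the name-building recursion used by classifier():
// for base classes C1..Cn it emits every multi-positive combination that
// keeps the bases in order, e.g. "C1,C2", "C1,C2,C3", "C1,C3", "C2,C3" for n = 3.
public class ComboNames {

    static void combine(String name, int position, int n, List<String> out) {
        for (int k = position + 1; k < n; k++) {
            String merged = name + ",C" + (k + 1);  // e.g. "C1" -> "C1,C2"
            out.add(merged);
            combine(merged, k, n, out);             // recurse one base class deeper
        }
    }

    static List<String> allCombos(int n) {
        List<String> out = new ArrayList<>();
        // Mirror the outer loop in the script: every base class except the last
        for (int i = 0; i < n - 1; i++) {
            combine("C" + (i + 1), i, n, out);
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(allCombos(3));  // [C1,C2, C1,C2,C3, C1,C3, C2,C3]
    }
}
```

In the real script each merged name also filters the cell list against the next class's thresholds, and the ClassDepth measurement prevents a shorter combination (e.g. "C2,C3") from overwriting a longer one (e.g. "C1,C2,C3,C4") assigned earlier.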
//V3 Corrected classification over-write error on classifiers with more than 3 parts
import javafx.application.Platform
import javafx.beans.property.SimpleLongProperty
import javafx.geometry.Insets
import javafx.scene.Scene
import javafx.geometry.Pos
import javafx.scene.control.Button
import javafx.scene.control.Label
import javafx.scene.control.TableView
import javafx.scene.control.TextField
import javafx.scene.control.CheckBox
import javafx.scene.control.ComboBox
import javafx.scene.control.TableColumn
import javafx.scene.control.ColorPicker
import javafx.scene.layout.BorderPane
import javafx.scene.layout.GridPane
import javafx.scene.control.Tooltip
import javafx.stage.Stage
import qupath.lib.gui.QuPathGUI
import qupath.lib.gui.helpers.ColorToolsFX;
import javafx.scene.paint.Color;
//Settings to control the dialog boxes for the GUI
int col = 0
int row = 0
int textFieldWidth = 120
int labelWidth = 150
def gridPane = new GridPane()
gridPane.setPadding(new Insets(10, 10, 10, 10));
gridPane.setVgap(2);
gridPane.setHgap(10);
def server = getCurrentImageData().getServer()
//Upper thresholds will default to the max bit depth, since that is likely the most common upper limit for a given image.
maxPixel = Math.pow((double) 2,(double)server.getBitsPerPixel())-1
positive = []
//print(maxPixel)
def titleLabel = new Label("Intended for use where one marker determines a base class.\nFor example, you could use Channel 1 Cytoplasmic Mean and Channel 2 Nuclear Mean\nto generate two base classes and a Double positive class where each condition is true.\n\n")
gridPane.add(titleLabel,col, row++, 3, 1)
def requestLabel = new Label("How many base classes/single measurements are you interested in?\nThe above example would have two.\n")
gridPane.add(requestLabel,col, row++, 3, 1)
def TextField classText = new TextField("2");
classText.setMaxWidth( textFieldWidth);
classText.setAlignment(Pos.CENTER_RIGHT)
gridPane.add(classText, col++, row, 1, 1)
//ArrayList<Label> channelLabels
Button startButton = new Button()
startButton.setText("Start Classifying")
gridPane.add(startButton, col, row++, 1, 1)
startButton.setTooltip(new Tooltip("If you need to change the number of classes, re-run the script"));
col = 0
row+=10 //spacer
def loadLabel = new Label("Load a classifier:")
gridPane.add(loadLabel,col++, row, 2, 1)
def TextField classFile = new TextField("MyClassifier");
classFile.setMaxWidth( textFieldWidth);
classFile.setAlignment(Pos.CENTER_RIGHT)
gridPane.add( classFile, col++, row, 1, 1)
Button loadButton = new Button()
loadButton.setText("Load Classifier")
gridPane.add(loadButton, col++, row++, 1, 1)
//incredibly lazy and sloppy coding, just a copy and paste taking slightly different inputs
loadButton.setOnAction{
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",classFile.getText())
new File(path).withObjectInputStream {
cObj = it.readObject()
}
//Create an arraylist with the same number of entries as classes
CHANNELS = cObj.size()
col = 0
row = 0
def secondGridPane = new GridPane()
secondGridPane.setPadding(new Insets(10, 10, 10, 10));
secondGridPane.setVgap(2);
secondGridPane.setHgap(10);
def assist = new Label("Short Class Names are recommended as dual positives and beyond use the full names of all positive classes.\n ")
secondGridPane.add(assist, col, row++, 5, 1)
def mChoice = new Label("Measurement ")
mChoice.setMaxWidth(400)
mChoice.setAlignment(Pos.CENTER_RIGHT)
def mLowThresh = new Label("Lower Threshold <= ")
def mHighThresh = new Label("<= Upper Threshold ")
def mClassName = new Label("Class Name ")
secondGridPane.add( mChoice, col++, row, 1,1)
secondGridPane.add( mLowThresh, col++, row, 1,1)
secondGridPane.add( mClassName, col++, row, 1,1)
secondGridPane.add( mHighThresh, col, row++, 1,1)
//create data structures to use for building the classifier
boxes = new ComboBox [CHANNELS]
lowerTs = new TextField [CHANNELS]
classList = new TextField [CHANNELS]
upperTs = new TextField [CHANNELS]
colorPickers = new ColorPicker [CHANNELS]
//create the dialog where the user will select the measurements of interest and values
for (def i=0; i<CHANNELS;i++) {
col =0
//Add to dialog box, new row for each
boxes[i] = new ComboBox()
qupath.lib.classifiers.PathClassificationLabellingHelper.getAvailableFeatures(getDetectionObjects()).each {boxes[i].getItems().add(it) }
boxes[i].setValue(cObj[i][0])
classList[i] = new TextField(cObj[i][2])
lowerTs[i] = new TextField(cObj[i][1])
upperTs[i] = new TextField(cObj[i][3])
classList[i].setMaxWidth( textFieldWidth);
classList[i].setAlignment(Pos.CENTER_RIGHT)
lowerTs[i].setMaxWidth( textFieldWidth);
lowerTs[i].setAlignment(Pos.CENTER_RIGHT)
upperTs[i].setMaxWidth( textFieldWidth);
upperTs[i].setAlignment(Pos.CENTER_RIGHT)
colorPickers[i] = new ColorPicker(Color.web(cObj[i][4]))
secondGridPane.add(boxes[i], col++, row, 1,1)
secondGridPane.add(lowerTs[i], col++, row, 1,1)
secondGridPane.add(classList[i], col++, row, 1, 1)
secondGridPane.add(upperTs[i], col++, row, 1,1)
secondGridPane.add(colorPickers[i], col++, row++, 1,1)
}
Button runButton = new Button()
runButton.setText("Run Classifier")
secondGridPane.add(runButton, 0, row++, 1, 1)
//All stuff for actually classifying cells
runButton.setOnAction {
//set up for classifier
def cells = getCellObjects()
cells.each {it.setPathClass(getPathClass('Negative'))}
startTime = System.currentTimeMillis()
//start classifier with all cells negative
for (def i=0; i<CHANNELS; i++){
def lower = Float.parseFloat(lowerTs[i].getText())
def upper = Float.parseFloat(upperTs[i].getText())
//create lists for each measurement, classify cells based off of those measurements
positive[i] = cells.findAll {measurement(it, boxes[i].getValue()) >= lower && measurement(it, boxes[i].getValue()) <= upper}
positive[i].each {it.setPathClass(getPathClass(classList[i].getText()+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)}
c = colorPickers[i].getValue()
currentPathClass = getPathClass(classList[i].getText()+' positive')
//for some reason setColor needs to be used here instead of setColorRGB which applies to objects and not classes?
currentPathClass.setColor(ColorToolsFX.getRGBA(c))
}
//Call the classifier on each list of positive single class cells, except for the last one!
for (def i=0; i<(CHANNELS-1); i++){
println("ROUND "+i)
int remaining = 0
for (def j = i+1; j<CHANNELS; j++){
remaining +=1
}
//println("SENDING CELLS TO CLASSIFIER "+positive[i].size())
depth = 2
classifier(classList[i].getText(), positive[i], remaining, i)
}
println("classifier done")
fireHierarchyUpdate()
}
//end Run Button
row+=10 //spacer
Button saveButton = new Button()
saveButton.setText("Save Classifier")
secondGridPane.add(saveButton, 1, row, 1, 1)
TextField saveFile = new TextField("MyClassifier");
saveFile.setMaxWidth( textFieldWidth);
saveFile.setAlignment(Pos.CENTER_RIGHT)
secondGridPane.add( saveFile, 2, row++, 1, 1)
//All stuff for actually classifying cells
saveButton.setOnAction {
def export = []
for (def l=0; l<CHANNELS;l++){
export << [boxes[l].getValue(), lowerTs[l].getText(), classList[l].getText(), upperTs[l].getText(), colorPickers[l].getValue().toString()]
}
mkdirs(buildFilePath(PROJECT_BASE_DIR, "classifiers"))
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",saveFile.getText())
new File(path).withObjectOutputStream {
it.writeObject(export)
}
}
//End of classifier window
Platform.runLater {
def stage3 = new Stage()
stage3.initOwner(QuPathGUI.getInstance().getStage())
stage3.setScene(new Scene( secondGridPane))
stage3.setTitle("Loaded Classifier "+classFile.getText())
stage3.setWidth(870);
stage3.setHeight(900);
//stage.setResizable(false);
stage3.show()
}
}
//end of the loaded classifier
startButton.setOnAction {
col = 0
row = 0
//Create an arraylist with the same number of entries as classes
CHANNELS = Float.parseFloat(classText.getText())
//channelLabels = new ArrayList( CHANNELS)
def secondGridPane = new GridPane()
secondGridPane.setPadding(new Insets(10, 10, 10, 10));
secondGridPane.setVgap(2);
secondGridPane.setHgap(10);
def assist = new Label("Short class names are recommended, as dual positives and beyond use the full names of all positive classes.\n ")
secondGridPane.add(assist, col, row++, 5, 1)
def mChoice = new Label("Measurement ")
mChoice.setMaxWidth(400)
mChoice.setAlignment(Pos.CENTER_RIGHT)
def mLowThresh = new Label("Lower Threshold <= ")
def mHighThresh = new Label("<= Upper Threshold ")
def mClassName = new Label("Class Name ")
secondGridPane.add( mChoice, col++, row, 1,1)
secondGridPane.add( mLowThresh, col++, row, 1,1)
secondGridPane.add( mClassName, col++, row, 1,1)
secondGridPane.add( mHighThresh, col, row++, 1,1)
//create data structures to use for building the classifier
boxes = new ComboBox [CHANNELS]
lowerTs = new TextField [CHANNELS]
classList = new TextField [CHANNELS]
upperTs = new TextField [CHANNELS]
colorPickers = new ColorPicker [CHANNELS]
//create the dialog where the user will select the measurements of interest and values
for (def i=0; i<CHANNELS;i++) {
col =0
//Add to dialog box, new row for each
boxes[i] = new ComboBox()
qupath.lib.classifiers.PathClassificationLabellingHelper.getAvailableFeatures(getDetectionObjects()).each {boxes[i].getItems().add(it) }
classList[i] = new TextField("C" + (i+1))
lowerTs[i] = new TextField("0")
upperTs[i] = new TextField(maxPixel.toString())
classList[i].setMaxWidth( textFieldWidth);
classList[i].setAlignment(Pos.CENTER_RIGHT)
lowerTs[i].setMaxWidth( textFieldWidth);
lowerTs[i].setAlignment(Pos.CENTER_RIGHT)
upperTs[i].setMaxWidth( textFieldWidth);
upperTs[i].setAlignment(Pos.CENTER_RIGHT)
colorPickers[i] = new ColorPicker()
secondGridPane.add(boxes[i], col++, row, 1,1)
secondGridPane.add(lowerTs[i], col++, row, 1,1)
secondGridPane.add(classList[i], col++, row, 1, 1)
secondGridPane.add(upperTs[i], col++, row, 1,1)
secondGridPane.add(colorPickers[i], col++, row++, 1,1)
}
Button runButton = new Button()
runButton.setText("Run Classifier")
secondGridPane.add(runButton, 0, row++, 1, 1)
//All stuff for actually classifying cells
runButton.setOnAction {
//set up for classifier
def cells = getCellObjects()
cells.each {it.setPathClass(getPathClass('Negative'))}
//start classifier with all cells negative
for (def i=0; i<CHANNELS; i++){
def lower = Float.parseFloat(lowerTs[i].getText())
def upper = Float.parseFloat(upperTs[i].getText())
//create lists for each measurement, classify cells based off of those measurements
positive[i] = cells.findAll {measurement(it, boxes[i].getValue()) >= lower && measurement(it, boxes[i].getValue()) <= upper}
positive[i].each {it.setPathClass(getPathClass(classList[i].getText()+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)}
c = colorPickers[i].getValue()
currentPathClass = getPathClass(classList[i].getText()+' positive')
//Note: setColor must be used here rather than setColorRGB; setColorRGB applies to objects, not classes
currentPathClass.setColor(ColorToolsFX.getRGBA(c))
}
//Call the classifier on each list of positive single class cells, except for the last one!
for (def i=0; i<(CHANNELS-1); i++){
println("ROUND "+i)
int remaining = CHANNELS - 1 - i //number of base classes after this one
//println("SENDING CELLS TO CLASSIFIER "+positive[i].size())
depth = 2
classifier(classList[i].getText(), positive[i], remaining, i)
}
println("classifier done")
fireHierarchyUpdate()
}
//end Run Button
//////////////////////////
row+=10 //spacer
Button saveButton = new Button()
saveButton.setText("Save Classifier")
secondGridPane.add(saveButton, 1, row, 1, 1)
TextField saveFile = new TextField("MyClassifier");
saveFile.setMaxWidth( textFieldWidth);
saveFile.setAlignment(Pos.CENTER_RIGHT)
secondGridPane.add( saveFile, 2, row++, 1, 1)
//All stuff for actually classifying cells
saveButton.setOnAction {
def export = []
for (def l=0; l<CHANNELS;l++){
export << [boxes[l].getValue(), lowerTs[l].getText(), classList[l].getText(), upperTs[l].getText(), colorPickers[l].getValue().toString()]
}
mkdirs(buildFilePath(PROJECT_BASE_DIR, "classifiers"))
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",saveFile.getText())
new File(path).withObjectOutputStream {
it.writeObject(export)
}
}
//////////////////////
//End of classifier window
Platform.runLater {
def stage2 = new Stage()
stage2.initOwner(QuPathGUI.getInstance().getStage())
stage2.setScene(new Scene( secondGridPane))
stage2.setTitle("Build Classifier ")
stage2.setWidth(870);
stage2.setHeight(900);
//stage.setResizable(false);
stage2.show()
}
}
//Platform.runLater is needed so the dialog is created and shown on the JavaFX application thread.
Platform.runLater {
def stage = new Stage()
stage.initOwner(QuPathGUI.getInstance().getStage())
stage.setScene(new Scene( gridPane))
stage.setTitle("Simple Classifier for Multiple Classes ")
stage.setWidth(550);
stage.setHeight(300);
//stage.setResizable(false);
stage.show()
}
//Recursive function to keep track of what needs to be classified next.
//listAName is the current class name (for example Class 1 during the first pass); it is extended at each intersection,
//so cells from that pass would end up being called "Class 1,Class 2 positive".
//listA is the current list of cells being checked for intersection with the first member of...
//remainingListSize is the number of lists in "positive[]" that the current list still needs to be checked against.
//position keeps track of the starting position of the listAName class: on the first run-through everything starts with C1,
//the next run-through starts at position 2, since the base class will be C2.
void classifier (listAName, listA, remainingListSize, position = 0){
//println("listofLists " +remainingListSize)
//println("base list size"+listA.size())
for (def y=0; y <remainingListSize; y++){
//println("listofLists in loop" +remainingListSize)
//println("y "+y)
//println("depth"+depth)
k = (position+y+1).intValue()
//println("k "+k)
// get the measurements needed to determine if a cell is a member of the next class (from listOfLists)
def lower = Float.parseFloat(lowerTs[k].getText())
def upper = Float.parseFloat(upperTs[k].getText())
//intersect the listA with the first of the listOfLists
//on the first run, this would take all of Class 1, and compare it with measurements that determine Class 2, resulting in a subset of
//Class 1 that meet both criteria
def passList = listA.findAll {measurement(it, boxes[k].getValue()) >= lower && measurement(it, boxes[k].getValue()) <= upper}
newName = classList[k].getText()
//Create a new name based off of the current name and the newly compared class
// on the first runthrough this would give "Class 1,Class 2 positive"
def mergeName = listAName+","+newName
//println("depth "+depth)
//println(mergeName+" with number of remaining lists "+remainingListSize)
passList.each{
//Check whether the class being applied is "shorter" than the current class.
//This prevents something like "C2,C3" from overwriting "C1,C2,C3,C4" from the first call.
if (it.getMeasurementList().getMeasurementValue("ClassDepth") < depth) {
it.setPathClass(getPathClass(mergeName+' positive'));
it.getMeasurementList().putMeasurement("ClassDepth", depth)
}
}
if (k == (positive.size()-1)){
//If we are comparing the current list to the last positive class list, we are done
//Go up one level of classifier depth and return
depth -=1
return;
} else{
//Otherwise, move one place further along the "positive" list of base classes, and increase depth
//This happens when going from C1,C2 to C1,C2,C3 etc.
def passAlong = remainingListSize-1
//println("passAlong "+passAlong.size())
//println("name for next " +mergeName)
depth +=1
classifier(mergeName, passList, passAlong, k)
}
//println("loopy depth"+depth)
}
}
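The recursion above is easiest to see away from QuPath. The following is a hypothetical standalone sketch (plain Java for portability, not part of these scripts; `ComboSketch` and `combos` are invented names) that enumerates just the merged class names the classifier walks through, depth-first, for a set of base class names:

```java
import java.util.ArrayList;
import java.util.List;

public class ComboSketch {
    // enumerate every multi-positive combination name, depth-first,
    // mirroring how classifier() walks the "positive" lists above
    static List<String> combos(String[] names) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < names.length - 1; i++)
            recurse(names[i], i, names, out);
        return out;
    }

    static void recurse(String prefix, int position, String[] names, List<String> out) {
        for (int k = position + 1; k < names.length; k++) {
            String merged = prefix + "," + names[k];  // e.g. "C1" -> "C1,C2"
            out.add(merged + " positive");
            if (k < names.length - 1)                 // go deeper: "C1,C2" -> "C1,C2,C3"
                recurse(merged, k, names, out);
        }
    }

    public static void main(String[] args) {
        // prints [C1,C2 positive, C1,C2,C3 positive, C1,C3 positive, C2,C3 positive]
        System.out.println(combos(new String[]{"C1", "C2", "C3"}));
    }
}
```

The real function additionally tracks "ClassDepth" so that a short combination (e.g. C2,C3) cannot overwrite a longer one (e.g. C1,C2,C3) already assigned to the same cell.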
//V3 Corrected classification over-write error on classifiers with more than 3 parts
import qupath.lib.gui.helpers.ColorToolsFX;
import javafx.scene.paint.Color;
//Hopefully you can simply replace fileName with your classifier's name, and include this in a script.
fileName = "MyClassifier"
positive = []
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",fileName)
new File(path).withObjectInputStream {
cObj = it.readObject()
}
//Create an arraylist with the same number of entries as classes
CHANNELS = cObj.size()
//println(cObj)
//set up for classifier
def cells = getDetectionObjects()
cells.each {it.setPathClass(getPathClass('Negative'))}
//start classifier with all cells negative
for (def i=0; i<CHANNELS; i++){
def lower = Float.parseFloat(cObj[i][1])
def upper = Float.parseFloat(cObj[i][3])
//create lists for each measurement, classify cells based off of those measurements
positive[i] = cells.findAll {measurement(it, cObj[i][0]) >= lower && measurement(it, cObj[i][0]) <= upper}
positive[i].each {it.setPathClass(getPathClass(cObj[i][2]+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)}
c = Color.web(cObj[i][4])
currentPathClass = getPathClass(cObj[i][2]+' positive')
//Note: setColor must be used here rather than setColorRGB; setColorRGB applies to objects, not classes
currentPathClass.setColor(ColorToolsFX.getRGBA(c))
}
for (def i=0; i<(CHANNELS-1); i++){
//println(i)
int remaining = CHANNELS - 1 - i //number of base classes after this one
depth = 2
classifier(cObj[i][2], positive[i], remaining, i)
}
fireHierarchyUpdate()
def classifier (listAName, listA, remainingListSize, position){
//current point in the list of lists, allows access to the measurements needed to figure out what from the current class is also part of the next class
for (def y=0; y <remainingListSize; y++){
k = (position+y+1).intValue()
// get the measurements needed to determine if a cell is a member of the next class (from listOfLists)
def lower = Float.parseFloat(cObj[k][1])
def upper = Float.parseFloat(cObj[k][3])
//intersect the listA with the first of the listOfLists
//on the first run, this would take all of Class 1, and compare it with measurements that determine Class 2, resulting in a subset of
//Class 1 that meet both criteria
def passList = listA.findAll {measurement(it, cObj[k][0]) >= lower && measurement(it, cObj[k][0]) <= upper}
newName = cObj[k][2]
//Create a new name based off of the current name and the newly compared class
// on the first runthrough this would give "Class 1,Class 2 positive"
def mergeName = listAName+","+newName
passList.each{
if (it.getMeasurementList().getMeasurementValue("ClassDepth") < depth) {
it.setPathClass(getPathClass(mergeName+' positive'));
it.getMeasurementList().putMeasurement("ClassDepth", depth)
}
}
if (k == (positive.size()-1)){
//println(passList.size()+"number of "+mergeName+" cells passed")
for (def z=0; z<CHANNELS; z++){
//println("before"+positive[z].size())
positive[z] = positive[z].minus(passList)
//println(z+" after "+positive[z].size())
}
depth -=1
return;
} else{
def passAlong = remainingListSize-1
//println("passAlong "+passAlong.size())
//println("name for next " +mergeName)
depth +=1
classifier(mergeName, passList, passAlong, k)
}
}
}
//V3 Corrected classification over-write error on classifiers with more than 3 parts
import qupath.lib.gui.helpers.ColorToolsFX;
import javafx.scene.paint.Color;
//Hopefully you can simply replace fileName with your classifier's name, and include this in a script.
fileName = "MyClassifier"
positive = []
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",fileName)
new File(path).withObjectInputStream {
cObj = it.readObject()
}
//Create an arraylist with the same number of entries as classes
CHANNELS = cObj.size()
//println(cObj)
//set up for classifier
def cells = getCellObjects()
cells.each {it.setPathClass(getPathClass('Negative'))}
//start classifier with all cells negative
for (def i=0; i<CHANNELS; i++){
def lower = Float.parseFloat(cObj[i][1])
def upper = Float.parseFloat(cObj[i][3])
//create lists for each measurement, classify cells based off of those measurements
positive[i] = cells.findAll {measurement(it, cObj[i][0]) >= lower && measurement(it, cObj[i][0]) <= upper}
positive[i].each {it.setPathClass(getPathClass(cObj[i][2]+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)}
c = Color.web(cObj[i][4])
currentPathClass = getPathClass(cObj[i][2]+' positive')
//Note: setColor must be used here rather than setColorRGB; setColorRGB applies to objects, not classes
currentPathClass.setColor(ColorToolsFX.getRGBA(c))
}
for (def i=0; i<(CHANNELS-1); i++){
//println(i)
int remaining = CHANNELS - 1 - i //number of base classes after this one
depth = 2
classifier(cObj[i][2], positive[i], remaining, i)
}
fireHierarchyUpdate()
def classifier (listAName, listA, remainingListSize, position){
//current point in the list of lists, allows access to the measurements needed to figure out what from the current class is also part of the next class
for (def y=0; y <remainingListSize; y++){
k = (position+y+1).intValue()
// get the measurements needed to determine if a cell is a member of the next class (from listOfLists)
def lower = Float.parseFloat(cObj[k][1])
def upper = Float.parseFloat(cObj[k][3])
//intersect the listA with the first of the listOfLists
//on the first run, this would take all of Class 1, and compare it with measurements that determine Class 2, resulting in a subset of
//Class 1 that meet both criteria
def passList = listA.findAll {measurement(it, cObj[k][0]) >= lower && measurement(it, cObj[k][0]) <= upper}
newName = cObj[k][2]
//Create a new name based off of the current name and the newly compared class
// on the first runthrough this would give "Class 1,Class 2 positive"
def mergeName = listAName+","+newName
passList.each{
if (it.getMeasurementList().getMeasurementValue("ClassDepth") < depth) {
it.setPathClass(getPathClass(mergeName+' positive'));
it.getMeasurementList().putMeasurement("ClassDepth", depth)
}
}
if (k == (positive.size()-1)){
//println(passList.size()+"number of "+mergeName+" cells passed")
for (def z=0; z<CHANNELS; z++){
//println("before"+positive[z].size())
positive[z] = positive[z].minus(passList)
//println(z+" after "+positive[z].size())
}
depth -=1
return;
} else{
def passAlong = remainingListSize-1
//println("passAlong "+passAlong.size())
//println("name for next " +mergeName)
depth +=1
classifier(mergeName, passList, passAlong, k)
}
}
}
//Description of use found here: https://petebankhead.github.io/qupath/scripts/2018/08/08/three-regions.html
/**
* Script to help with annotating tumor regions, separating the tumor margin from the center.
*
* Here, each of the margin regions is approximately 500 microns in width.
*
* @author Pete Bankhead
*/
import qupath.lib.common.GeneralTools
import qupath.lib.objects.PathAnnotationObject
import qupath.lib.objects.PathObject
import qupath.lib.roi.PathROIToolsAwt
import java.awt.Rectangle
import java.awt.geom.Area
import static qupath.lib.scripting.QPEx.*
//-----
// Some things you might want to change
// How much to expand each region
double expandMarginMicrons = 500.0
// Define the colors
def coloInnerMargin = getColorRGB(0, 0, 200)
def colorOuterMargin = getColorRGB(0, 200, 0)
def colorCentral = getColorRGB(0, 0, 0)
// Choose whether to lock the annotations or not (it's generally a good idea to avoid accidentally moving them)
def lockAnnotations = true
//-----
// Extract the main info we need
def imageData = getCurrentImageData()
def hierarchy = imageData.getHierarchy()
def server = imageData.getServer()
// We need the pixel size
if (!server.hasPixelSizeMicrons()) {
print 'We need the pixel size information here!'
return
}
if (!GeneralTools.almostTheSame(server.getPixelWidthMicrons(), server.getPixelHeightMicrons(), 0.0001)) {
print 'Warning! The pixel width & height are different; the average of both will be used'
}
// Get annotation & detections
def annotations = getAnnotationObjects()
def selected = getSelectedObject()
if (selected == null || !selected.isAnnotation()) {
print 'Please select an annotation object!'
return
}
// We need one selected annotation as a starting point; if we have other annotations, they will constrain the output
annotations.remove(selected)
// If we have at most one other annotation, it represents the tissue
Area areaTissue
PathObject tissueAnnotation
if (annotations.isEmpty()) {
areaTissue = new Area(new Rectangle(0, 0, server.getWidth(), server.getHeight()))
} else if (annotations.size() == 1) {
tissueAnnotation = annotations.get(0)
areaTissue = PathROIToolsAwt.getArea(tissueAnnotation.getROI())
} else {
    print 'Sorry, this script only supports one selected annotation for the tumor region, and at most one other annotation to constrain the expansion'
return
}
// Calculate how much to expand
double expandPixels = expandMarginMicrons / server.getAveragedPixelSizeMicrons()
def roiOriginal = selected.getROI()
def areaTumor = PathROIToolsAwt.getArea(roiOriginal)
// Get the outer margin area
def areaOuter = PathROIToolsAwt.shapeMorphology(areaTumor, expandPixels)
areaOuter.subtract(areaTumor)
areaOuter.intersect(areaTissue)
def roiOuter = PathROIToolsAwt.getShapeROI(areaOuter, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT())
def annotationOuter = new PathAnnotationObject(roiOuter)
annotationOuter.setName("Outer margin")
annotationOuter.setColorRGB(colorOuterMargin)
// Get the central area
def areaCentral = PathROIToolsAwt.shapeMorphology(areaTumor, -expandPixels)
areaCentral.intersect(areaTissue)
def roiCentral = PathROIToolsAwt.getShapeROI(areaCentral, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT())
def annotationCentral = new PathAnnotationObject(roiCentral)
annotationCentral.setName("Center")
annotationCentral.setColorRGB(colorCentral)
// Get the inner margin area
areaInner = areaTumor
areaInner.subtract(areaCentral)
areaInner.intersect(areaTissue)
def roiInner = PathROIToolsAwt.getShapeROI(areaInner, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT())
def annotationInner = new PathAnnotationObject(roiInner)
annotationInner.setName("Inner margin")
annotationInner.setColorRGB(coloInnerMargin)
// Add the annotations
hierarchy.getSelectionModel().clearSelection()
hierarchy.removeObject(selected, true)
def annotationsToAdd = [annotationOuter, annotationInner, annotationCentral];
annotationsToAdd.each {it.setLocked(lockAnnotations)}
if (tissueAnnotation == null) {
hierarchy.addPathObjects(annotationsToAdd, false)
} else {
tissueAnnotation.addPathObjects(annotationsToAdd)
hierarchy.fireHierarchyChangedEvent(this, tissueAnnotation)
if (lockAnnotations)
tissueAnnotation.setLocked(true)
}
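The margin construction above is plain `java.awt.geom.Area` arithmetic. As a minimal sketch (using simple rectangles as stand-ins for real ROIs and for `shapeMorphology`; `MarginSketch`/`outerMargin` are invented names), the outer margin is the expanded shape minus the tumor, clipped to the tissue:

```java
import java.awt.Rectangle;
import java.awt.geom.Area;

public class MarginSketch {
    // outer margin = expanded tumor, minus the tumor itself, clipped to the tissue
    public static Area outerMargin(Area tumor, Area expanded, Area tissue) {
        Area outer = new Area();
        outer.add(expanded);
        outer.subtract(tumor);
        outer.intersect(tissue);
        return outer;
    }

    public static void main(String[] args) {
        Area tumor = new Area(new Rectangle(40, 40, 20, 20));     // stand-in tumor ROI
        Area expanded = new Area(new Rectangle(30, 30, 40, 40));  // stand-in for shapeMorphology(tumor, +expandPixels)
        Area tissue = new Area(new Rectangle(0, 0, 100, 100));    // stand-in tissue ROI
        Area outer = outerMargin(tumor, expanded, tissue);
        System.out.println(outer.contains(35, 35));  // prints true: inside the margin ring
        System.out.println(outer.contains(50, 50));  // prints false: inside the tumor, not the ring
    }
}
```

The central and inner regions in the script follow the same pattern, with a negative morphology (erosion) for the center and a subtraction of the center from the tumor for the inner margin.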
//Not well commented, but the overall purpose of this script is to
//1. detect tissue in a brightfield image
//2. send the tissue to ImageJ
//2.5 If the image is too large, you may need to cut the tissue into tiles using something like
//runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizePx": 10000.0, "trimToROI": true, "makeAnnotations": true, "removeParentAnnotation": true}');
//at which point you will also need to run selectAnnotations() and mergeSelectedAnnotations() before doing the calculations at the end
//3. Using an ImageJ macro (with all comments and newline characters removed) threshold and find all empty spots
//The values for this will depend greatly on image quality, background, and brightness. You may also want to adjust size thresholds for the Analyze Particles... command
//4. Once the detection objects are returned, sum up their areas, and divide by the total parent annotation area to find a percentage lipid area
// Since all of the detections exist as objects, you could also perform other analyses of them by adding circularity measurements etc. (Calculate Features/Add Shape Features)
import qupath.imagej.plugins.ImageJMacroRunner
import qupath.lib.plugins.parameters.ParameterList
runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2', '{"threshold": 161, "requestedDownsample": 1.0, "minAreaPixels": 1.0E7, "maxHoleAreaPixels": 25500.0, "darkBackground": false, "smoothImage": true, "medianCleanup": true, "dilateBoundaries": false, "smoothCoordinates": true, "excludeOnBoundary": false, "singleAnnotation": true}');
// Create a macro runner so we can check what the parameter list contains
def params = new ImageJMacroRunner(getQuPath()).getParameterList()
print ParameterList.getParameterListJSON(params, ' ')
// Change the value of a parameter, using the JSON to identify the key
params.getParameters().get('downsampleFactor').setValue(1.0 as double)
params.getParameters().get('getOverlay').setValue(true)
params.getParameters().get('clearObjects').setValue(true)
print ParameterList.getParameterListJSON(params, ' ')
// Get the macro text and other required variables
def macro = 'min=newArray(3);max=newArray(3);filter=newArray(3);a=getTitle();run("HSB Stack");run("Convert Stack to Images");selectWindow("Hue");rename("0");selectWindow("Saturation");rename("1");selectWindow("Brightness");rename("2");min[0]=9;max[0]=255;filter[0]="pass";min[1]=0;max[1]=45;filter[1]="pass";min[2]=136;max[2]=255;filter[2]="pass";for (i=0;i<3;i++){ selectWindow(""+i); setThreshold(min[i], max[i]); run("Convert to Mask"); if (filter[i]=="stop") run("Invert");}imageCalculator("AND create", "0","1");imageCalculator("AND create", "Result of 0","2");for (i=0;i<3;i++){ selectWindow(""+i); close();}selectWindow("Result of 0");close();selectWindow("Result of Result of 0");rename(a);run("Smooth");run("Smooth");run("Smooth");run("Smooth");run("Smooth");run("Smooth");run("Smooth");run("Make Binary");run("Despeckle");run("Despeckle");run("Analyze Particles...", "size=200-19500 show=[Overlay Masks] add");'
def imageData = getCurrentImageData()
//***********************************************************************************************
//YOU MAY NEED TO CREATE TILE ANNOTATIONS HERE DEPENDING ON THE SIZE OF YOUR IMAGE + REMOVE PARENT
//I recommend the largest tiles you can possibly get away with since they will disrupt adipocyte detection
//*******************************************************************************************
def annotations = getAnnotationObjects()
// Loop through the annotations and run the macro
for (annotation in annotations) {
ImageJMacroRunner.runMacro(params, imageData, null, annotation, macro)
}
//selectAnnotations()+mergeSelectedObjects() here if needed.
selected = getAnnotationObjects()
removeObject(selected[0], true)
addObject(selected[0])
selected[0].setLocked(true)
selectObjects{p -> (p.getLevel()==1) && (p.isAnnotation() == false)};
clearSelectedObjects(false);
import qupath.lib.objects.PathDetectionObject
hierarchy = getCurrentHierarchy()
for (annotation in getAnnotationObjects()){
//Block 1
def tiles = hierarchy.getDescendantObjects(annotation,null, PathDetectionObject)
double totalArea = 0
for (def tile in tiles){
totalArea += tile.getROI().getArea()
}
annotation.getMeasurementList().putMeasurement("Marked area px", totalArea)
def annotationArea = annotation.getROI().getArea()
annotation.getMeasurementList().putMeasurement("Marked area %", totalArea/annotationArea*100)
}
print 'Done!'
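Step 4 above is just a ratio of summed child-detection areas to the parent annotation's area. A minimal sketch of that arithmetic (plain Java; `AreaPercent`/`percentMarked` are hypothetical names, not QuPath API):

```java
public class AreaPercent {
    // percentage of the parent annotation covered by the summed detections
    public static double percentMarked(double[] detectionAreasPx, double annotationAreaPx) {
        double total = 0;
        for (double a : detectionAreasPx)
            total += a;                       // sum child detection areas, as in Block 1 above
        return total / annotationAreaPx * 100.0;
    }

    public static void main(String[] args) {
        // e.g. three lipid detections inside a 10,000 px^2 annotation
        System.out.println(percentMarked(new double[]{200, 300, 500}, 10000));  // prints 10.0
    }
}
```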
//Set of scripts for running multiple cell detections in sequence. They can be used as many times as needed, though it is best
//to use a different detection file name for each iteration through the scripts.
//Updated 24/2/19 to ensure the directory exists
//STEP 1
selectAnnotations()
//YOUR CELL DETECTION LINE HERE
mkdirs(buildFilePath(PROJECT_BASE_DIR, 'detection object files'))
def path = buildFilePath(PROJECT_BASE_DIR, 'detection object files', getCurrentImageData().getServer().getShortServerName()+' objects')
def detections = getCellObjects() //.collect {new qupath.lib.objects.PathCellObject(it.getROI(), it.getPathClass())}
new File(path).withObjectOutputStream {
it.writeObject(detections)
}
print 'Done!'
//STEP2
//Run another cell detection
//STEP3
def path = buildFilePath(PROJECT_BASE_DIR, 'detection object files', getCurrentImageData().getServer().getShortServerName()+' objects')
def detections = null
new File(path).withObjectInputStream {
detections = it.readObject()
}
addObjects(detections)
fireHierarchyUpdate()
print 'Added ' + detections
//STEP4
//Check for overlapping cells. This script simply eliminates smaller cells within larger cells, but this may not always be the
//criterion you want to use. Adjust as necessary
hierarchy = getCurrentHierarchy()
def parentCellsList = []
getAnnotationObjects().each{ parentCellsList << it.getChildObjects().findAll{p->p.getChildObjects().size()>0} }
parentCellsList.each{
it.each{
removeObjects(it.getChildObjects(), false)
}
}
fireHierarchyUpdate()
print "Done"
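The Step 4 criterion (drop any cell fully contained within another) can be illustrated outside QuPath with plain rectangles; in the script the hierarchy's parent/child relationship performs the real containment test. `OverlapSketch`/`dropContained` are invented names for this sketch:

```java
import java.awt.Rectangle;
import java.util.ArrayList;
import java.util.List;

public class OverlapSketch {
    // keep only rectangles that are NOT fully contained inside another rectangle,
    // the stand-in here for removing child cells nested within larger cells
    public static List<Rectangle> dropContained(List<Rectangle> cells) {
        List<Rectangle> kept = new ArrayList<>();
        for (Rectangle a : cells) {
            boolean contained = false;
            for (Rectangle b : cells)
                if (b != a && b.contains(a))
                    contained = true;
            if (!contained)
                kept.add(a);
        }
        return kept;
    }
}
```

As the script's own comments note, containment may not be the criterion you want; intersection area or nucleus position are common alternatives.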
//Seems to work for Version 1.2, will not work with 1.3 and future builds
//Creating tiled areas and summing them for area based measurements.
//Setup functions, adjust to taste for negative detection
//runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2', '{"threshold": 204, "requestedPixelSizeMicrons": 3.0, "minAreaMicrons": 5000000.0, "maxHoleAreaMicrons": 500.0, "darkBackground": false, "smoothImage": false, "medianCleanup": false, "dilateBoundaries": false, "smoothCoordinates": false, "excludeOnBoundary": false, "singleAnnotation": true}');
//This line should almost always be run first and then manually checked for accuracy of stain/artifacts.
//Choose your color deconvolutions here, I named them RED and BROWN for simplicity
//For this script, THE STAIN YOU ARE LOOKING FOR SHOULD COME FIRST - Stain 1 is Red for a PicroSirius Red detection
setColorDeconvolutionStains('{"Name" : "Red with background", "Stain 1" : "RED", "Values 1" : "0.25052 0.76455 0.59388 ", "Stain 2" : "BROWN", "Values 2" : "0.22949 0.5595 0.79642 ", "Background" : " 255 255 255 "}')
//Subdivide your annotation area into tiles
selectAnnotations();
runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizeMicrons": 1000.0, "trimToROI": true, "makeAnnotations": true, "removeParentAnnotation": false}');
//Perform the positive pixel detection on the tiles, but not the larger annotation
def tiles = getAnnotationObjects().findAll {it.getDisplayedName().toString().contains('Tile') == true}
getCurrentHierarchy().getSelectionModel().setSelectedObjects(tiles, null)
runPlugin('qupath.imagej.detect.tissue.PositivePixelCounterIJ', '{"downsampleFactor": 1, "gaussianSigmaMicrons": 0.3, "thresholdStain1": 0.2, "thresholdStain2": 0.1, "addSummaryMeasurements": true}');
//Sum the areas of each tile
def total_Negative = 0
for (tile in tiles){
total_Negative += tile.getMeasurementList().getMeasurementValue("Negative pixel count")
}
//summary[0] should be the original annotation, this assumes that there was only one original annotation
def summary = getAnnotationObjects().findAll {it.getDisplayedName().toString().contains('Tile') != true}
summary[0].getMeasurementList().putMeasurement("Negative Pixel Sum", total_Negative)
def total_area = summary[0].getROI().getArea()
summary[0].getMeasurementList().putMeasurement("Percentage PSR Positive", total_Negative/total_area*100)
//Remove all of the tile annotations which would result in less readable output than a single tissue value
removeObjects(tiles,true)
//The following goes at the end of basically any script that ends with useful measurements stored on the annotation objects
/*
* QuPath v0.1.2 has some bugs that make exporting annotations a bit annoying, specifically it doesn't include the 'dot'
* needed in the filename if you run it in batch, and it might put the 'slashes' the wrong way on Windows.
* Manually fixing these afterwards is not very fun.
*
* Anyhow, until this is fixed you could try the following script with Run -> Run for Project.
* It should create a new subdirectory in the project, and write text files containing results there.
*
* @author Pete Bankhead
*/
def name = getProjectEntry().getImageName() + '.txt'
def path = buildFilePath(PROJECT_BASE_DIR, 'annotation results')
mkdirs(path)
path = buildFilePath(path, name)
saveAnnotationMeasurements(path)
print 'Results exported to ' + path
//QUPATH VERSION 1.3- Does NOT work with 1.2
//Creating tiled areas and summing them for area based measurements applied to the original tissue annotation. Assumes 1 annotation, but could be expanded to handle multiple.
//runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2', '{"threshold": 204, "requestedPixelSizeMicrons": 3.0, "minAreaMicrons": 5000000.0, "maxHoleAreaMicrons": 500.0, "darkBackground": false, "smoothImage": false, "medianCleanup": false, "dilateBoundaries": false, "smoothCoordinates": false, "excludeOnBoundary": false, "singleAnnotation": true}');
//This line should almost always be run first, and then manually checked for accuracy of stain/artifacts.
server = getCurrentImageData().getServer()
//Choose your color deconvolutions here, I named them RED and BROWN for simplicity
//For this script, THE STAIN YOU ARE LOOKING FOR SHOULD COME FIRST - Stain 1 is Red for a PicroSirius Red detection
setColorDeconvolutionStains('{"Name" : "Red with background", "Stain 1" : "RED", "Values 1" : "0.25052 0.76455 0.59388 ", "Stain 2" : "BROWN", "Values 2" : "0.22949 0.5595 0.79642 ", "Background" : " 255 255 255 "}')
//Subdivide your annotation area into tiles
selectAnnotations();
runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizeMicrons": 1000.0, "trimToROI": true, "makeAnnotations": true, "removeParentAnnotation": false}');
//Perform the positive pixel detection on the tiles, but not the larger annotation
def tiles = getAnnotationObjects().findAll {it.getDisplayedName().toString().contains('Tile') == true}
getCurrentHierarchy().getSelectionModel().setSelectedObjects(tiles, null)
runPlugin('qupath.imagej.detect.tissue.PositivePixelCounterIJ', '{"downsampleFactor": 1, "gaussianSigmaMicrons": 0.3, "thresholdStain1": 0.2, "thresholdStain2": 0.1, "addSummaryMeasurements": true}');
//Calculate the percentage of "negative" positive pixels, and apply that to the original tissue annotation
def total_Negative = 0
def total_Positive = 0
//Sum the areas of each tile
for (tile in tiles){
total_Negative += tile.getMeasurementList().getMeasurementValue("Negative pixel area µm^2")
total_Positive += tile.getMeasurementList().getMeasurementValue("Positive pixel area µm^2")
}
//summary[0] should be the original annotation, this assumes that there was only one original annotation
def summary = getAnnotationObjects().findAll {it.getDisplayedName().toString().contains('Tile') != true}
//Compute the totals and percentages on the original annotation
summary[0].getMeasurementList().putMeasurement("Negative Area Sum", total_Negative)
def total_area = summary[0].getROI().getArea()*server.getPixelHeightMicrons()*server.getPixelWidthMicrons()
summary[0].getMeasurementList().putMeasurement("Percentage PSR Positive", total_Negative/total_area*100)
summary[0].getMeasurementList().putMeasurement("Percentage Too Dark", total_Positive/total_area*100)
//Remove all of the tile annotations which would result in less readable output than a single tissue value
removeObjects(tiles,true)
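//Quick sanity check of the percentage math above, using hypothetical areas in square microns
//(a 1000 um^2 annotation containing 250 um^2 of "negative" pixel area should report 25%):
double exampleNegativeArea = 250.0
double exampleAnnotationArea = 1000.0
assert exampleNegativeArea / exampleAnnotationArea * 100 == 25.0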
/*
* QuPath v0.1.2 has some bugs that make exporting annotations a bit annoying, specifically it doesn't include the 'dot'
* needed in the filename if you run it in batch, and it might put the 'slashes' the wrong way on Windows.
* Manually fixing these afterwards is not very fun.
*
* Anyhow, until this is fixed you could try the following script with Run -> Run for Project.
* It should create a new subdirectory in the project, and write text files containing results there.
*
* @author Pete Bankhead
*/
def name = getProjectEntry().getImageName() + '.txt'
def path = buildFilePath(PROJECT_BASE_DIR, 'annotation results')
mkdirs(path)
path = buildFilePath(path, name)
saveAnnotationMeasurements(path)
print 'Results exported to ' + path
//Once your nucleus detection is settled using Cell Detection, replace the cell detection line of code with your own
//The first few lines of code create a whole image object and lock it so that you can draw annotations within.
createSelectAllObject(true);
selected = getSelectedObject()
selected.setLocked(true)
runPlugin('qupath.imagej.detect.nuclei.WatershedCellDetection', '{"detectionImageFluorescence": 3, "requestedPixelSizeMicrons": 0.5, "backgroundRadiusMicrons": 0.0, "medianRadiusMicrons": 1.0, "sigmaMicrons": 1.5, "minAreaMicrons": 50.0, "maxAreaMicrons": 600.0, "threshold": 400, "watershedPostProcess": true, "cellExpansionMicrons": 0.0, "includeNuclei": true, "smoothBoundaries": true, "makeMeasurements": true}');
//Step 2 is entirely manual at this point and requires that you hand draw your cytoplasms
//BEFORE running this script, draw your cytoplasmic areas with the annotation drawing tools in QuPath. Once you are set, you should
//be able to run this script in order to merge the cytoplasms with the nuclei to create cells. This will not work if the
//cytoplasms cross outside of the area defined by the largest annotation object.
//At the end it generates some cell shape measurements.
import qupath.lib.objects.PathCellObject
// Get the current hierarchy
def hierarchy = getCurrentHierarchy()
// Get the hand-drawn cytoplasm annotations (any annotation below the top level)
def targets = getObjects{return it.getLevel()!=1 && it.isAnnotation()}
// Check we have anything to work with
if ( targets.isEmpty()) {
print("No cytoplasm annotations found!")
return
}
// Loop through objects
def newDetections = new ArrayList<>()
for (def cellAnnotation in targets) {
// Unlikely to happen... but skip any objects not having a ROI
if (!cellAnnotation.hasROI()) {
print("Skipping object without ROI: " + cellAnnotation)
continue
}
def nucleus = hierarchy.getDescendantObjects(cellAnnotation, null, null)
def roiNuc = nucleus[0].getROI()
def roiCyto = cellAnnotation.getROI()
def nucMeasure = nucleus[0].getMeasurementList()
def cell = new PathCellObject(roiCyto,roiNuc,cellAnnotation.getPathClass(),nucMeasure)
newDetections.add(cell)
print("Adding " + cell)
//remove stand alone nucleus
removeObject(nucleus[0], true)
}
removeObjects( targets, true)
// Actually add the objects
hierarchy.addPathObjects(newDetections, false)
fireHierarchyUpdate()
if (newDetections.size() > 0)
print("Added " + newDetections.size() + " detection(s)")
selectDetections()
runPlugin('qupath.lib.plugins.objects.ShapeFeaturesPlugin', '{"area": true, "perimeter": true, "circularity": true, "useMicrons": true}');
//Recreate your whole image annotation.
createSelectAllObject(true);
//Finally, add some measurements to the cell that would allow you to classify them more easily than the whole cell measurements
//generated by the Add Intensity Features command
//Calculate the mean cytoplasmic intensities in an IF image based on nuclear intensities and whole cell intensities
import qupath.lib.objects.*
def addColors(){
runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": '+getCurrentImageData().getServer().getPixelWidthMicrons()+', "region": "ROI", "tileSizeMicrons": 25.0, "channel1": true, "channel2": true, "channel3": true, "channel4": true, "doMean": true, "doStdDev": false, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickDistance": 1, "haralickBins": 32}');
}
//The only thing beyond this point that should need to be modified is the removalList command at the end, which you can disable
//if you wish to keep whole cell measurements
// Get cells & create temporary nucleus objects - storing link to cell in a map
def cells = getCellObjects()
def map = [:]
for (cell in cells) {
def detection = new PathDetectionObject(cell.getNucleusROI())
map[detection] = cell
}
// Get the nuclei as a list
def nuclei = map.keySet() as List
// and then select the nuclei
getCurrentHierarchy().getSelectionModel().setSelectedObjects(nuclei, null)
// Add as many sets of color deconvolution stains and Intensity features plugins as you want here
//This section ONLY adds measurements to the temporary nucleus objects, not the cell
addColors()
//etc etc. Make sure each set uses different names for the stains, or else later measurements will overwrite earlier ones
// Don't need selection now
clearSelectedObjects()
// Can update measurements generated for the nucleus to the parent cell's measurement list
for (nucleus in nuclei) {
def cell = map[nucleus]
def cellMeasurements = cell.getMeasurementList()
for (key in nucleus.getMeasurementList().getMeasurementNames()) {
double value = nucleus.getMeasurementList().getMeasurementValue(key)
def listOfStrings = key.tokenize(':')
def baseValueName = listOfStrings[-2]+listOfStrings[-1]
nuclearName = "Nuclear" + baseValueName
cellMeasurements.putMeasurement(nuclearName, value)
}
cellMeasurements.closeList()
}
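//Minimal illustration of the key renaming above, using a made-up measurement name (no QuPath objects needed).
//tokenize(':') splits the key, and the last two pieces usually hold the channel and "Mean":
def exampleKey = "ROI: 0.50 um per pixel: Channel 1: Mean"
def examplePieces = exampleKey.tokenize(':')
assert "Nuclear" + examplePieces[-2] + examplePieces[-1] == "Nuclear Channel 1 Mean"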
//I want to remove the original whole cell measurements which contain the mu symbol
// Not yet sure I will find the whole cell useful so not adding it back in yet.
def removalList = []
//Create whole cell measurements for all of the above stains
selectDetections()
addColors()
//Create cytoplasmic measurements by subtracting the nuclear measurements from the whole cell, based on total intensity (mean value*area)
for (cell in cells) {
//A mess of things I could probably call within functions
def cellMeasurements = cell.getMeasurementList()
double cellArea = cell.getMeasurementList().getMeasurementValue("Cell Shape: Area µm^2")
double nuclearArea = cell.getMeasurementList().getMeasurementValue("Nucleus Shape: Area µm^2")
double cytoplasmicArea = cellArea-nuclearArea
for (key in cell.getMeasurementList().getMeasurementNames()) {
//check if the value is one of the added intensity measurements
if (key.contains("per pixel")){
//check if we already have this value in the list.
//probably an easier way to do this outside of every cycle of the for loop
if (!removalList.contains(key)) removalList<<key
double value = cell.getMeasurementList().getMeasurementValue(key)
//calculate the sum of the OD measurements
cellOD = value * cellArea
//break each measurement into component parts, then take the last two
// which will usually contain the color vector and "mean"
def listOfStrings = key.tokenize(':')
def baseValueName = listOfStrings[-2]+listOfStrings[-1]
//access the nuclear value version of the base name, and use it and the whole cell value to
//calculate the rough cytoplasmic value
def cytoplasmicKey = "Cytoplasmic" + baseValueName
def nuclearKey = "Nuclear" + baseValueName
def nuclearOD = nuclearArea * cell.getMeasurementList().getMeasurementValue(nuclearKey)
def cytoplasmicValue = (cellOD - nuclearOD)/cytoplasmicArea
cellMeasurements.putMeasurement(cytoplasmicKey, cytoplasmicValue)
cellMeasurements.putMeasurement("Cytoplasm Shape: Area µm^2", cytoplasmicArea)
}
}
cellMeasurements.closeList()
}
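//Sanity check of the cytoplasmic subtraction above, with made-up numbers:
//a cell mean of 10 over 100 um^2 and a nuclear mean of 20 over 40 um^2
//gives a cytoplasmic mean of (10*100 - 20*40)/(100 - 40) = 200/60
double exCellMean = 10, exCellArea = 100, exNucMean = 20, exNucArea = 40
double exCytoMean = (exCellMean*exCellArea - exNucMean*exNucArea)/(exCellArea - exNucArea)
assert Math.abs(exCytoMean - 200.0/60.0) < 1e-9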
removalList.each {println(it)}
//comment out this line if you want the whole cell measurements.
removalList.each {removeMeasurements(qupath.lib.objects.PathCellObject, it)}
//************************************************************//
fireHierarchyUpdate()
println "Done!"
//v3.8
//This version REMOVES any current annotations. Comment out the clearAllObjects() and createSelectAllObject() lines below to prevent this from happening.
//Important to note that you will almost certainly need to downsample significantly for any whole slide image.
//The script will error out VERY quickly otherwise, and I am not programmery enough to handle that cleanly. Crash away!
import javafx.application.Platform
import javafx.beans.property.SimpleLongProperty
import javafx.geometry.Insets
import javafx.scene.Scene
import javafx.geometry.Pos
import javafx.scene.control.Button
import javafx.scene.control.Label
import javafx.scene.control.TableView
import javafx.scene.control.TextField
import javafx.scene.control.CheckBox
import javafx.scene.control.TableColumn
import javafx.scene.layout.BorderPane
import javafx.scene.layout.GridPane
import javafx.scene.control.Tooltip
import javafx.stage.Stage
import qupath.lib.gui.QuPathGUI
import qupath.imagej.plugins.ImageJMacroRunner
import qupath.lib.plugins.parameters.ParameterList
import qupath.lib.roi.*
import qupath.lib.objects.*
def imageData = getCurrentImageData()
def server = imageData.getServer()
//Initially clear all objects and create a whole image annotation. You could instead delete this annotation and create your own
clearAllObjects()
createSelectAllObject(true);
getAnnotationObjects().each{it.setLocked(true)}
//calculate bit depth for the initially suggested upper threshold
int maxPixel = Math.pow((double) 2,(double)server.getBitsPerPixel())-1
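//For example, an 8-bit image gives 2^8 - 1 = 255 and a 16-bit image gives 65535:
assert (int)(Math.pow(2d, 8d) - 1) == 255
assert (int)(Math.pow(2d, 16d) - 1) == 65535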
def pixelSize = server.getPixelHeightMicrons()
//Some values for setting up the dialog box
int col = 0
int row = 0
int textFieldWidth = 100
int labelWidth = 150
def gridPane = new GridPane()
gridPane.setPadding(new Insets(10, 10, 10, 10));
gridPane.setVgap(5);
gridPane.setHgap(10);
def titleLabel = new Label("Adjust the current annotation or create a new one.\nMultiple overlapping annotations are not recommended")
gridPane.add(titleLabel,col, row++, 2, 1)
titleLabel.setTooltip(new Tooltip("The script automatically clears all objects\n and creates a whole image annotation.\n You may create your own annotations\n before clicking run, but non-rectangle\n annotations may exhibit unexpected behavior."))
//Checkbox for splitting annotations
def checkLabel = new Label("Split unconnected annotations")
gridPane.add(checkLabel,col++, row, 1, 1)
def splitBox = new CheckBox();
gridPane.add(splitBox, col++, row++,1,1)
//Downsample section (padding labels with extra spaces is a terrible way to determine column width!)
col=0
def downsampleLabel = new Label("Downsample: ")
downsampleLabel.setTooltip(new Tooltip("Increase this if you get an error trying to export the image to ImageJ"));
downsampleLabel.setMinWidth(labelWidth)
def TextField downsampleText = new TextField("8.0");
downsampleText.setMaxWidth( textFieldWidth);
downsampleText.setAlignment(Pos.CENTER_RIGHT)
gridPane.add(downsampleLabel,col++, row, 1, 1)
gridPane.add(downsampleText,col, row++, 1, 1)
//reset the column count whenever starting a new row
col = 0
//Sigma test section
def sigmaLabel = new Label("Sigma: ")
sigmaLabel.setTooltip(new Tooltip("Lower the sigma to remove empty space around annotations, raise it to remove empty spaces within the annotation.\nApplies a gaussian blur."));
def TextField sigmaText = new TextField("4.0");
sigmaText.setMaxWidth( textFieldWidth);
sigmaText.setAlignment(Pos.CENTER_RIGHT)
gridPane.add(sigmaLabel, col++, row, 1, 1)
gridPane.add(sigmaText, col, row++, 1, 1)
col = 0
//lowerThreshold section
def lowerThresholdLabel = new Label("Lower Threshold: ")
lowerThresholdLabel.setTooltip(new Tooltip("No annotation usually means the threshold is too high, full image annotation means the threshold is too low"));
def TextField lowerThresholdText = new TextField("20");
lowerThresholdText.setMaxWidth( textFieldWidth);
lowerThresholdText.setAlignment(Pos.CENTER_RIGHT)
gridPane.add(lowerThresholdLabel, col++, row, 1, 1)
gridPane.add(lowerThresholdText, col, row++, 1, 1)
//upperThreshold section
col=0
def upperThresholdLabel = new Label("Upper Threshold: ")
upperThresholdLabel.setTooltip(new Tooltip("Default is the max bit depth -1"));
def TextField upperThresholdText = new TextField(maxPixel.toString());
upperThresholdText.setMaxWidth( textFieldWidth);
upperThresholdText.setAlignment(Pos.CENTER_RIGHT)
gridPane.add(upperThresholdLabel, col++, row, 1, 1)
gridPane.add(upperThresholdText, col, row++, 1, 1)
def channelLabel = new Label("Final weights will be normalized.")
channelLabel.setTooltip(new Tooltip("I am so bad at programming"));
gridPane.add(channelLabel, 0, row, 1, 1)
def channelLabel2 = new Label("Channel Weights")
channelLabel2.setTooltip(new Tooltip("Any non-negative values"));
gridPane.add(channelLabel2, 1, row++, 1, 1)
//Set up rows for data entry for each fluorescent channel
def channels = []
//Variable to track channel count
int c = 0
ArrayList<Label> channelLabels
ArrayList<TextField> channelWeights
//Pretty sure these could be lists
if (!imageData.getServer().isRGB()) {
channels = getQuPath().getViewer().getImageDisplay().getAvailableChannels()
channelLabels = new ArrayList(channels.size())
channelWeights = new ArrayList(channels.size())
for (channel in channels) {
channelLabels.add( new Label(channel.toString()))
channelWeights.add( new TextField((1/channels.size()).toString()));
channelWeights[c].setMaxWidth( textFieldWidth);
channelWeights[c].setAlignment(Pos.CENTER_RIGHT)
//Add to dialog box, new row for each
col=0
gridPane.add(channelLabels[c], col++, row, 1, 1)
gridPane.add(channelWeights.get(c), col, row++, 1, 1)
c++
}
} else {
//Sloppy but it works to get RGB images included
channels = ["Red","Green","Blue"]
channelLabels = new ArrayList(3)
channelWeights = new ArrayList(3)
for (channel in channels) {
channelLabels.add( new Label(channel))
channelWeights.add( new TextField((1/channels.size()).toString()));
channelWeights[c].setMaxWidth( textFieldWidth);
channelWeights[c].setAlignment(Pos.CENTER_RIGHT)
//Add to dialog box, new row for each
col=0
gridPane.add(channelLabels[c], col++, row, 1, 1)
gridPane.add(channelWeights.get(c), col, row++, 1, 1)
c++
}
}
//Cycle through all channels to set up most of the rest of the dialog box
def runButtonLabel = new Label("This button will always run\n regardless of error message->\nLarge images may be slow.\nPrimarily intended for\nfluorescent images.")
gridPane.add(runButtonLabel, 0, row, 1, 1)
//Finally create a run button to start everything
Button runButton = new Button()
runButton.setText("Run")
gridPane.add(runButton, 1, row++, 1, 1)
runButton.setTooltip(new Tooltip("This may take a little bit of time depending on image size and downsampling."));
runButton.setOnAction {
originalAnnotations = getAnnotationObjects()
//At the moment I don't think any of these values should need anything larger than a float... though if greater bit depths are used this might need changing
float downsample = Float.parseFloat(downsampleText.getText());
float sigma = Float.parseFloat(sigmaText.getText());
float lowerThreshold = Float.parseFloat(lowerThresholdText.getText());
float upperThreshold = Float.parseFloat(upperThresholdText.getText());
def weights = []
//Place all of the final weights into an array that can be read into ImageJ
for (i=0;i<channels.size();i++){
weights.add(Float.parseFloat(channelWeights.get(i).getText()))
}
//Normalize weights
def sum = weights.sum()
if (sum<=0){
print "Please use positive weights"
runButton.setText("Weight error.")
return;
}
for (i=0; i<weights.size(); i++){
weights[i] = weights[i]/sum
}
//[1,2,3,4] format can't be read into ImageJ arrays (or at least I didn't see an easy way), it needs to be converted to 1,2,3,4
def weightList =weights.join(", ")
//Get rid of everything already in the image. Not totally necessary, but useful when I am spamming various values.
def annotations = getAnnotationObjects()
def params = new ImageJMacroRunner(getQuPath()).getParameterList()
// Change the value of a parameter, using the JSON to identify the key
params.getParameters().get('downsampleFactor').setValue(downsample)
params.getParameters().get('sendROI').setValue(false)
params.getParameters().get('sendOverlay').setValue(false)
params.getParameters().get('getOverlay').setValue(false)
if (!getQuPath().getClass().getPackage()?.getImplementationVersion()){
params.getParameters().get('getOverlayAs').setValue('Annotations')
}
params.getParameters().get('getROI').setValue(true)
params.getParameters().get('clearObjects').setValue(false)
// Get the macro text and other required variables
def macro ='original = getImageID();run("Duplicate...", "title=X3t4Y6lEt duplicate");'+
'weights=newArray('+weightList+');run("Stack to Images");name=getTitle();'+
'baseName = substring(name, 0, lengthOf(name)-1);'+
'for (i=0; i<'+channels.size()+';'+
'i++){currentImage = baseName+(i+1);selectWindow(currentImage);'+
'run("Multiply...", "value="+weights[i]);}'+
'run("Images to Stack", "name=Stack title=[X3t4Y6lEt] use");'+
'run("Z Project...", "projection=[Sum Slices]");'+
'run("Gaussian Blur...", "sigma='+sigma+'");'+
'setThreshold('+lowerThreshold+', '+upperThreshold+');run("Convert to Mask");'+
'run("Create Selection");run("Colors...", "foreground=white background=black selection=white");'+
'run("Properties...", "channels=1 slices=1 frames=1 unit=um pixel_width='+pixelSize+' pixel_height='+pixelSize+' voxel_depth=1");'+
'selectImage(original);run("Restore Selection");'
def macroRGB = 'weights=newArray('+weightList+');'+
'original = getImageID();run("Duplicate...", " ");'+
'run("Make Composite");run("Stack to Images");'+
'selectWindow("Red");rename("Red X3t4Y6lEt");run("Multiply...", "value="+weights[0]);'+
'selectWindow("Green");rename("Green X3t4Y6lEt");run("Multiply...", "value="+weights[1]);'+
'selectWindow("Blue");rename("Blue X3t4Y6lEt");run("Multiply...", "value="+weights[2]);'+
'run("Images to Stack", "name=Stack title=[X3t4Y6lEt] use");'+
'run("Z Project...", "projection=[Sum Slices]");'+
'run("Gaussian Blur...", "sigma='+sigma+'");'+
'setThreshold('+lowerThreshold+', '+upperThreshold+');run("Convert to Mask");'+
'run("Create Selection");run("Colors...", "foreground=white background=black selection=cyan");'+
'run("Properties...", "channels=1 slices=1 frames=1 unit=um pixel_width='+pixelSize+' pixel_height='+pixelSize+' voxel_depth=1");'+
'selectImage(original);run("Restore Selection");'
for (annotation in annotations) {
//Check if we need to use the RGB version
if (imageData.getServer().isRGB()) {
ImageJMacroRunner.runMacro(params, imageData, null, annotation, macroRGB)
} else{ ImageJMacroRunner.runMacro(params, imageData, null, annotation, macro)}
}
//remove whole image annotation and lock the new annotation
removeObjects(annotations,true)
if (splitBox.isSelected()){
def areaAnnotations = getAnnotationObjects().findAll {it.getROI() instanceof AreaROI}
areaAnnotations.each { selected ->
def polygons = PathROIToolsAwt.splitAreaToPolygons(selected.getROI())
def newPolygons = polygons[1].collect {
updated = it
for (hole in polygons[0])
updated = PathROIToolsAwt.combineROIs(updated, hole, PathROIToolsAwt.CombineOp.SUBTRACT)
return updated
}
// Remove original annotation, add new ones
annotations = newPolygons.collect {new PathAnnotationObject(it)}
resetSelection()
removeObject(selected, true)
addObjects(annotations)
}
}
//Otherwise setLocked generates an error if no annotation was created
getAnnotationObjects().each{it.setLocked(true)}
runButton.setText("Run again?")
}
//Reset button to keep re-trying the same beginning annotation rather than continuing within resulting annotation
Button resetButton = new Button()
resetButton.setText("Reset?")
gridPane.add(resetButton, 0, ++row, 1, 1)
resetButton.setTooltip(new Tooltip("Clears all annotations and creates the pre-Run annotation."));
resetButton.setOnAction {
clearAllObjects()
addObjects(originalAnnotations)
getAnnotationObjects().each{it.setLocked(true)}
}
def warningLabel = new Label("These buttons will split your annotations regardless\nof checkbox at top.")
gridPane.add(warningLabel, 0, ++row, 2, 1)
//Option to remove small sized annotation areas. Requires pixel size
Button clipButton = new Button()
clipButton.setText("Remove Small")
gridPane.add(clipButton, 0, ++row, 1, 1)
clipButton.setTooltip(new Tooltip("Remove annotations below the indicated area IN SQUARE MICRONS.\nHave not made a version that works for this without pixel size."));
def TextField clipSizeText = new TextField("50");
clipSizeText.setMaxWidth( textFieldWidth);
clipSizeText.setAlignment(Pos.CENTER_RIGHT)
gridPane.add(clipSizeText, 1, row, 1, 1)
clipSizeText.setTooltip(new Tooltip("Remove annotations below the indicated area IN SQUARE MICRONS.\nHave not made a version that works for this without pixel size."));
//Clip button goes with the Remove Small button on the dialog, to remove objects below the text box amount in um^2
clipButton.setOnAction {
def areaAnnotations = getAnnotationObjects().findAll {it.getROI() instanceof AreaROI}
for (section in areaAnnotations){
def polygons = PathROIToolsAwt.splitAreaToPolygons(section.getROI())
def newPolygons = polygons[1].collect {
updated = it
for (hole in polygons[0])
updated = PathROIToolsAwt.combineROIs(updated, hole, PathROIToolsAwt.CombineOp.SUBTRACT)
return updated
}
// Remove original annotation, add new ones
annotations = newPolygons.collect {new PathAnnotationObject(it)}
removeObject(section, true)
addObjects(annotations)
}
//PART2
double pixelWidth = server.getPixelWidthMicrons()
double pixelHeight = server.getPixelHeightMicrons()
def smallAnnotations = getAnnotationObjects().findAll {it.getROI().getScaledArea(pixelWidth, pixelHeight) < Double.parseDouble(clipSizeText.getText());}
println("small "+smallAnnotations)
removeObjects(smallAnnotations, true)
fireHierarchyUpdate()
}
//Fill holes option
Button fillButton = new Button()
fillButton.setText("Fill holes")
gridPane.add(fillButton, 0, ++row, 1, 1)
fillButton.setTooltip(new Tooltip("Fill in annotation holes less than the indicated area IN SQUARE MICRONS.\nHave not made a version that works for this without pixel size."));
def TextField fillSizeText = new TextField("50");
fillSizeText.setMaxWidth( textFieldWidth);
fillSizeText.setAlignment(Pos.CENTER_RIGHT)
gridPane.add(fillSizeText, 1, row, 1, 1)
fillSizeText.setTooltip(new Tooltip("Fill in annotation holes less than the indicated area IN SQUARE MICRONS.\nHave not made a version that works for this without pixel size."));
//Fill button goes with the Fill holes button on the dialog, to fill holes below the text box amount in um^2
fillButton.setOnAction {
// Loop over all annotation objects
def pathObjects = getAnnotationObjects()
// Create a list of objects to remove, add their replacements
def toRemove = []
def toAdd = []
for (pathObject in pathObjects) {
def roi = pathObject.getROI()
// AreaROIs are the only kind that might have holes
if (roi instanceof AreaROI ) {
// Extract exterior polygons
def polygons = PathROIToolsAwt.splitAreaToPolygons(roi)[1] as List
// If we have multiple polygons, merge them
def roiNew = polygons.remove(0)
def roiNegative = PathROIToolsAwt.splitAreaToPolygons(roi)[0] as List
for (temp in polygons){
roiNew = PathROIToolsAwt.combineROIs(temp, roiNew, PathROIToolsAwt.CombineOp.ADD)
}
for (temp in roiNegative){
if (temp.getArea() > Double.parseDouble(fillSizeText.getText())/pixelSize/pixelSize){
roiNew = PathROIToolsAwt.combineROIs(roiNew, temp, PathROIToolsAwt.CombineOp.SUBTRACT)
}
}
// Create a new annotation
toAdd << new PathAnnotationObject(roiNew, pathObject.getPathClass())
toRemove << pathObject
}
}
// Remove & add objects as required
def hierarchy = getCurrentHierarchy()
hierarchy.getSelectionModel().clearSelection()
hierarchy.removeObjects(toRemove, true)
hierarchy.addPathObjects(toAdd, false)
}
//Some stuff that controls the dialog box showing up. I don't really understand it but it is needed.
Platform.runLater {
def stage = new Stage()
stage.initOwner(QuPathGUI.getInstance().getStage())
stage.setScene(new Scene( gridPane))
stage.setTitle("Another Tissue Detection ")
stage.setWidth(350);
stage.setHeight(800);
//stage.setResizable(false);
stage.show()
}
//v1.2 WATCH FOR COMPLETION MESSAGE IN LOG, TAKES A LONG TIME IN LARGE IMAGES
//This version strips out the user interface and most options, and replaces them with variables at the beginning of the script.
//I recommend using the UI version to figure out your settings, and this version to run as part of a workflow.
//Possibly replace the whole image annotation with another annotation type
createSelectAllObject(true);
def sigma = 2
def downsample = 15
def lowerThreshold = 3300
//calculate bit depth for the initially suggested upper threshold; replace the value below with the Math.pow line or maxPixel variable
//int maxPixel = Math.pow((double) 2,(double)server.getBitsPerPixel())-1
def upperThreshold = 65535
def weights = [0,1,1,0]
//Remove smaller than
def smallestAnnotations = 1500
def fillHolesSmallerThan = 15000
//For detection of small objects, not included in GUI version.
def removeLargerThan = 99999999999999
import qupath.lib.gui.QuPathGUI
import qupath.imagej.plugins.ImageJMacroRunner
import qupath.lib.plugins.parameters.ParameterList
import qupath.lib.roi.*
import qupath.lib.objects.*
def imageData = getCurrentImageData()
def server = imageData.getServer()
def pixelSize = server.getPixelHeightMicrons()
//Place all of the final weights into an array that can be read into ImageJ
//Normalize weights so that sum =1
def sum = weights.sum()
if (sum<=0){
print "Please use positive weights"
return;
}
for (i=0; i<weights.size(); i++){
weights[i] = weights[i]/sum
}
//[1,2,3,4] format can't be read into ImageJ arrays (or at least I didn't see an easy way), it needs to be converted to 1,2,3,4
def weightList =weights.join(", ")
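//Quick illustration of the normalization and join above, with hypothetical weights:
def exampleWeights = [0, 2, 2, 0]
def exampleSum = exampleWeights.sum()
def exampleNormalized = exampleWeights.collect { it / exampleSum }
assert exampleNormalized.sum() == 1
assert exampleNormalized.join(", ") == "0, 0.5, 0.5, 0"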
//Get rid of everything already in the image. Not totally necessary, but useful when I am spamming various values.
def annotations = getAnnotationObjects()
def params = new ImageJMacroRunner(getQuPath()).getParameterList()
// Change the value of a parameter, using the JSON to identify the key
params.getParameters().get('downsampleFactor').setValue(downsample)
params.getParameters().get('sendROI').setValue(false)
params.getParameters().get('sendOverlay').setValue(false)
params.getParameters().get('getOverlay').setValue(false)
if (!getQuPath().getClass().getPackage()?.getImplementationVersion()){
params.getParameters().get('getOverlayAs').setValue('Annotations')
}
params.getParameters().get('getROI').setValue(true)
params.getParameters().get('clearObjects').setValue(false)
// Get the macro text and other required variables
def macro ='original = getImageID();run("Duplicate...", "title=X3t4Y6lEt duplicate");'+
'weights=newArray('+weightList+');run("Stack to Images");name=getTitle();'+
'baseName = substring(name, 0, lengthOf(name)-1);'+
'for (i=0; i<'+weights.size()+';'+
'i++){currentImage = baseName+(i+1);selectWindow(currentImage);'+
'run("Multiply...", "value="+weights[i]);}'+
'run("Images to Stack", "name=Stack title=[X3t4Y6lEt] use");'+
'run("Z Project...", "projection=[Sum Slices]");'+
'run("Gaussian Blur...", "sigma='+sigma+'");'+
'setThreshold('+lowerThreshold+', '+upperThreshold+');run("Convert to Mask");'+
'run("Create Selection");run("Colors...", "foreground=white background=black selection=white");'+
'run("Properties...", "channels=1 slices=1 frames=1 unit=um pixel_width='+pixelSize+' pixel_height='+pixelSize+' voxel_depth=1");'+
'selectImage(original);run("Restore Selection");'
def macroRGB = 'weights=newArray('+weightList+');'+
'original = getImageID();run("Duplicate...", " ");'+
'run("Make Composite");run("Stack to Images");'+
'selectWindow("Red");rename("Red X3t4Y6lEt");run("Multiply...", "value="+weights[0]);'+
'selectWindow("Green");rename("Green X3t4Y6lEt");run("Multiply...", "value="+weights[1]);'+
'selectWindow("Blue");rename("Blue X3t4Y6lEt");run("Multiply...", "value="+weights[2]);'+
'run("Images to Stack", "name=Stack title=[X3t4Y6lEt] use");'+
'run("Z Project...", "projection=[Sum Slices]");'+
'run("Gaussian Blur...", "sigma='+sigma+'");'+
'setThreshold('+lowerThreshold+', '+upperThreshold+');run("Convert to Mask");'+
'run("Create Selection");run("Colors...", "foreground=white background=black selection=cyan");'+
'run("Properties...", "channels=1 slices=1 frames=1 unit=um pixel_width='+pixelSize+' pixel_height='+pixelSize+' voxel_depth=1");'+
'selectImage(original);run("Restore Selection");'
for (annotation in annotations) {
//Check if we need to use the RGB version
if (imageData.getServer().isRGB()) {
ImageJMacroRunner.runMacro(params, imageData, null, annotation, macroRGB)
} else{ ImageJMacroRunner.runMacro(params, imageData, null, annotation, macro)}
}
//remove whole image annotation and lock the new annotation
removeObjects(annotations,true)
//Remove small annotation areas (below smallestAnnotations um^2). Requires pixel size.
//First split each area annotation into its component polygons, subtracting any holes.
def areaAnnotations = getAnnotationObjects().findAll {it.getROI() instanceof AreaROI}
for (section in areaAnnotations){
def polygons = PathROIToolsAwt.splitAreaToPolygons(section.getROI())
def newPolygons = polygons[1].collect {
updated = it
for (hole in polygons[0])
updated = PathROIToolsAwt.combineROIs(updated, hole, PathROIToolsAwt.CombineOp.SUBTRACT)
return updated
}
// Remove original annotation, add new ones
annotations = newPolygons.collect {new PathAnnotationObject(it)}
removeObject(section, true)
addObjects(annotations)
}
//PART2
double pixelWidth = server.getPixelWidthMicrons()
double pixelHeight = server.getPixelHeightMicrons()
def smallAnnotations = getAnnotationObjects().findAll {it.getROI().getScaledArea(pixelWidth, pixelHeight) < smallestAnnotations}
println("small "+smallAnnotations)
removeObjects(smallAnnotations, true)
fireHierarchyUpdate()
// Fill holes smaller than fillHolesSmallerThan um^2, looping over all annotation objects
def pathObjects = getAnnotationObjects()
// Create a list of objects to remove, add their replacements
def toRemove = []
def toAdd = []
for (pathObject in pathObjects) {
def roi = pathObject.getROI()
// AreaROIs are the only kind that might have holes
if (roi instanceof AreaROI ) {
// Extract exterior polygons
def polygons = PathROIToolsAwt.splitAreaToPolygons(roi)[1] as List
// If we have multiple polygons, merge them
def roiNew = polygons.remove(0)
def roiNegative = PathROIToolsAwt.splitAreaToPolygons(roi)[0] as List
for (temp in polygons){
roiNew = PathROIToolsAwt.combineROIs(temp, roiNew, PathROIToolsAwt.CombineOp.ADD)
}
for (temp in roiNegative){
if (temp.getArea() > fillHolesSmallerThan/pixelSize/pixelSize){
roiNew = PathROIToolsAwt.combineROIs(roiNew, temp, PathROIToolsAwt.CombineOp.SUBTRACT)
}
}
// Create a new annotation
toAdd << new PathAnnotationObject(roiNew, pathObject.getPathClass())
toRemove << pathObject
}
}
// Remove & add objects as required
def hierarchy = getCurrentHierarchy()
hierarchy.getSelectionModel().clearSelection()
hierarchy.removeObjects(toRemove, true)
hierarchy.addPathObjects(toAdd, false)
def largeAnnotations = getAnnotationObjects().findAll {it.getROI().getScaledArea(pixelSize, pixelSize) > removeLargerThan}
removeObjects(largeAnnotations, true)
getAnnotationObjects().each{it.setLocked(true)}
//uncomment to merge final results into single line in annotations table
//selectAnnotations()
//mergeSelectedAnnotations()
println("Annotation areas completed")
/**
* Script to help with annotating tumor regions, separating the tumor margin from the center.
*
* Here, each of the margin regions is approximately 500 microns in width.
*
* @author Pete Bankhead
*/
import qupath.lib.common.GeneralTools
import qupath.lib.objects.PathAnnotationObject
import qupath.lib.objects.PathObject
import qupath.lib.roi.PathROIToolsAwt
import java.awt.Rectangle
import java.awt.geom.Area
import static qupath.lib.scripting.QPEx.*
//-----
// Some things you might want to change
// How much to expand each region
double expandMarginMicrons = 500.0
// Define the colors
def coloInnerMargin = getColorRGB(0, 0, 200)
def colorOuterMargin = getColorRGB(0, 200, 0)
def colorCentral = getColorRGB(0, 0, 0)
// Choose whether to lock the annotations or not (it's generally a good idea to avoid accidentally moving them)
def lockAnnotations = true
//-----
// Extract the main info we need
def imageData = getCurrentImageData()
def hierarchy = imageData.getHierarchy()
def server = imageData.getServer()
// We need the pixel size
if (!server.hasPixelSizeMicrons()) {
print 'We need the pixel size information here!'
return
}
if (!GeneralTools.almostTheSame(server.getPixelWidthMicrons(), server.getPixelHeightMicrons(), 0.0001)) {
print 'Warning! The pixel width & height are different; the average of both will be used'
}
// Get annotation & detections
def annotations = getAnnotationObjects()
def selected = getSelectedObject()
if (selected == null || !selected.isAnnotation()) {
print 'Please select an annotation object!'
return
}
// We need one selected annotation as a starting point; if we have other annotations, they will constrain the output
annotations.remove(selected)
// If we have at most one other annotation, it represents the tissue
Area areaTissue
PathObject tissueAnnotation
if (annotations.isEmpty()) {
areaTissue = new Area(new Rectangle(0, 0, server.getWidth(), server.getHeight()))
} else if (annotations.size() == 1) {
tissueAnnotation = annotations.get(0)
areaTissue = PathROIToolsAwt.getArea(tissueAnnotation.getROI())
} else {
print 'Sorry, this script only supports one selected annotation for the tumor region, and at most one other annotation to constrain the expansion'
return
}
// Calculate how much to expand
double expandPixels = expandMarginMicrons / server.getAveragedPixelSizeMicrons()
def roiOriginal = selected.getROI()
def areaTumor = PathROIToolsAwt.getArea(roiOriginal)
// Get the outer margin area
def areaOuter = PathROIToolsAwt.shapeMorphology(areaTumor, expandPixels)
areaOuter.subtract(areaTumor)
areaOuter.intersect(areaTissue)
def roiOuter = PathROIToolsAwt.getShapeROI(areaOuter, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT())
def annotationOuter = new PathAnnotationObject(roiOuter)
annotationOuter.setName("Outer margin")
annotationOuter.setColorRGB(colorOuterMargin)
// Get the central area
def areaCentral = PathROIToolsAwt.shapeMorphology(areaTumor, -expandPixels)
areaCentral.intersect(areaTissue)
def roiCentral = PathROIToolsAwt.getShapeROI(areaCentral, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT())
def annotationCentral = new PathAnnotationObject(roiCentral)
annotationCentral.setName("Center")
annotationCentral.setColorRGB(colorCentral)
// Get the inner margin area
def areaInner = areaTumor // note: this reuses (and will modify) the tumor Area
areaInner.subtract(areaCentral)
areaInner.intersect(areaTissue)
def roiInner = PathROIToolsAwt.getShapeROI(areaInner, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT())
def annotationInner = new PathAnnotationObject(roiInner)
annotationInner.setName("Inner margin")
annotationInner.setColorRGB(coloInnerMargin)
// Add the annotations
hierarchy.getSelectionModel().clearSelection()
hierarchy.removeObject(selected, true)
def annotationsToAdd = [annotationOuter, annotationInner, annotationCentral];
annotationsToAdd.each {it.setLocked(lockAnnotations)}
if (tissueAnnotation == null) {
hierarchy.addPathObjects(annotationsToAdd, false)
} else {
tissueAnnotation.addPathObjects(annotationsToAdd)
hierarchy.fireHierarchyChangedEvent(this, tissueAnnotation)
if (lockAnnotations)
tissueAnnotation.setLocked(true)
}
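The geometry in the margin script (dilate, erode, subtract, intersect) can be exercised outside QuPath with plain `java.awt.geom.Area` operations. In this minimal sketch, rectangles stand in for the real tumor/tissue ROIs, and `PathROIToolsAwt.shapeMorphology` is approximated by simply growing or shrinking the rectangle by a fixed 50 px margin; the coordinates are invented for illustration:

```java
import java.awt.Rectangle;
import java.awt.geom.Area;

public class MarginDemo {
    public static void main(String[] args) {
        // Hypothetical stand-ins for the real tissue and tumor ROIs
        Area tissue = new Area(new Rectangle(0, 0, 1000, 1000));
        Area tumor  = new Area(new Rectangle(400, 400, 200, 200));

        // Outer margin: dilate the tumor, remove the tumor itself, clip to tissue
        Area outer = new Area(new Rectangle(350, 350, 300, 300)); // tumor grown by 50 px
        outer.subtract(tumor);
        outer.intersect(tissue);

        // Central region: tumor eroded by 50 px
        Area central = new Area(new Rectangle(450, 450, 100, 100));

        // Inner margin: tumor minus the central region
        Area inner = new Area(tumor);
        inner.subtract(central);

        System.out.println(outer.contains(625, 500));   // in the ring outside the tumor -> true
        System.out.println(central.contains(500, 500)); // tumor centre -> true
        System.out.println(inner.contains(410, 500));   // just inside the tumor border -> true
        System.out.println(inner.contains(500, 500));   // centre is not in the inner margin -> false
    }
}
```

The real script does the same three subtract/intersect steps, but on true morphological expansions of an arbitrary ROI shape rather than on rectangles.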