Scripts that are complex enough not to fit in any of the other simple categories.

TOC
Affine transform objects between images.groovy - Used with the Align images (experimental) tool in v0.2.0m1.
Background staining check - Takes annotations, expands an area around them, and checks the staining level in that area, then deletes all expanded areas and any original areas that violate some condition. Can help check for staining artifacts.
Classifier with GUI.groovy - User-interface-based script to simplify classifying many possible channels. Also generates all possible combinations of base classes (double, triple, etc. positives).
*Added an updated version for 0.2.0M5
Classifier with no GUI.groovy - Same as above, but streamlined for use in a script, with no user interaction.
*Added detection-based versions of both scripts above, which should work for tiles.
DBSCAN 0.2.0.groovy - Implementation of DBSCAN for cluster analysis.
Hotspot Detection 0.2.0M8.groovy - Detects clusters of cells above a certain density and size threshold; written for 0.2.0M8.
Invasion assay or tumor adjacent area.groovy - Creates areas of increasing distance from the tumor annotation border. Use negative values in the annotation expansion for invasion assays.
Lipid detection and measurement.groovy - Detects lighter areas within your tissue area and creates detection objects with measurements.
Multiple cell detections.groovy - A set of scripts that let the user run one cell detection, store those results, run a second cell detection, and then import the results of the first. Useful when one set of cell detection parameters does not accurately detect all of your cells.
Positive Pixel scripting for QP 1.2.groovy - Demonstrates ways to successfully use positive pixel detection to handle difficult staining.
Positive Pixel scripting for QP 1.3.groovy - Same as above, but modified for changes to positive pixel detection in 1.3.
R-squared.groovy - GUI-based R-squared calculator that allows selection of objects by class. Only works for detections.
Added plots and the ability to save/export the results of multiple calculations.
R-squared pixel values.groovy - GUI-based R-squared calculator for pixel values in objects; a combination of the R-squared and colocalization scripts.
RareCellFetcher-allAnnotations.groovy - Totally higher class than CellTinder or CellRoulette.
Step 1 through Step 4 - Part of a workflow for semi-automated generation of very high resolution cells. The user defines the cytoplasm. https://groups.google.com/forum/#!msg/qupath-users/ehxID096NV8/U7n5_CNABwAJ
Tissue detection (for workflow) - Two scripts that mimic QuPath's Simple Tissue Detection but give more channel flexibility when working with fluorescent images. Normally QuPath will only use the first channel for tissue detection, while these let you choose the balance between channels that works best for you. The workflow version removes the GUI. https://groups.google.com/forum/#!topic/qupath-users/4g26bLOC_CE
Tissue detection m5 - Update to the scripts to work with version m5.
Tumor Region Measurements - Script from Pete that can be used for measurements in and around a tumor. https://petebankhead.github.io/qupath/scripts/2018/08/08/three-regions.html
Updated Jan 2019 with classifier scripts to make it easier to... well, classify. https://groups.google.com/forum/#!topic/qupath-users/LMxYihQMvTw
Updated Dec 2018 with a tissue detection script that can act in a similar (rough) fashion to Simple Tissue Detection, but with the advantage of allowing the user to choose and weight channels. This makes it possible to look at specific areas within tissue samples, even in 7-8 color images. See here for examples and an explanation: https://groups.google.com/forum/#!topic/qupath-users/4g26bLOC_CE
/** QUPATH 0.2.0m1
 * Script to transfer QuPath objects from one image to another, applying an AffineTransform to any ROIs.
 * https://forum.image.sc/t/interactive-image-alignment/23745/8
 */
// SET ME! Define transformation matrix
// Get this from 'Interactive image alignment (experimental)'
def matrix = [
    -0.998, -0.070, 127256.994,
     0.070, -0.998,  72627.371
]
// SET ME! Define image containing the original objects (must be in the current project)
def otherImageName = null
// SET ME! Delete existing objects
def deleteExisting = true
// SET ME! Change this if things end up in the wrong place
def createInverse = true
import qupath.lib.gui.helpers.DisplayHelpers
import qupath.lib.objects.PathCellObject
import qupath.lib.objects.PathDetectionObject
import qupath.lib.objects.PathObject
import qupath.lib.objects.PathObjects
import qupath.lib.objects.PathTileObject
import qupath.lib.roi.PathROIToolsAwt
import qupath.lib.roi.interfaces.ROI
import java.awt.geom.AffineTransform
import static qupath.lib.gui.scripting.QPEx.*
if (otherImageName == null) {
    DisplayHelpers.showErrorNotification("Transform objects", "Please specify an image name in the script!")
    return
}
// Get the project & the requested image name
def project = getProject()
def entry = project.getImageList().find {it.getImageName() == otherImageName}
if (entry == null) {
    print 'Could not find image with name ' + otherImageName
    return
}
def otherHierarchy = entry.readHierarchy()
def pathObjects = otherHierarchy.getRootObject().getChildObjects()
// Define the transformation matrix
def transform = new AffineTransform(
    matrix[0], matrix[3], matrix[1],
    matrix[4], matrix[2], matrix[5]
)
if (createInverse)
    transform = transform.createInverse()
if (deleteExisting)
    clearAllObjects()
def newObjects = []
for (pathObject in pathObjects) {
    newObjects << transformObject(pathObject, transform)
}
addObjects(newObjects)
print 'Done!'
/**
 * Transform object, recursively transforming all child objects
 *
 * @param pathObject
 * @param transform
 * @return
 */
PathObject transformObject(PathObject pathObject, AffineTransform transform) {
    // Create a new object with the converted ROI
    def roi = pathObject.getROI()
    def roi2 = transformROI(roi, transform)
    def newObject = null
    if (pathObject instanceof PathCellObject) {
        def nucleusROI = pathObject.getNucleusROI()
        if (nucleusROI == null)
            newObject = PathObjects.createCellObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
        else
            newObject = PathObjects.createCellObject(roi2, transformROI(nucleusROI, transform), pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathTileObject) {
        newObject = PathObjects.createTileObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else if (pathObject instanceof PathDetectionObject) {
        newObject = PathObjects.createDetectionObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    } else {
        newObject = PathObjects.createAnnotationObject(roi2, pathObject.getPathClass(), pathObject.getMeasurementList())
    }
    // Handle child objects
    if (pathObject.hasChildren()) {
        newObject.addPathObjects(pathObject.getChildObjects().collect({transformObject(it, transform)}))
    }
    return newObject
}
/**
 * Transform ROI (via conversion to Java AWT shape)
 *
 * @param roi
 * @param transform
 * @return
 */
ROI transformROI(ROI roi, AffineTransform transform) {
    def shape = PathROIToolsAwt.getShape(roi) // Should be able to use roi.getShape() - but there's currently a bug in it for rectangles/ellipses!
    def shape2 = transform.createTransformedShape(shape)
    return PathROIToolsAwt.getShapeROI(shape2, roi.getC(), roi.getZ(), roi.getT(), 0.5)
}
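A point of frequent confusion in the script above is matrix ordering: the `matrix` list is written row by row (m00, m01, m02, m10, m11, m12), while `java.awt.geom.AffineTransform`'s six-argument constructor expects column order (m00, m10, m01, m11, m02, m12) - hence the shuffled indices. A minimal plain-Java check of that mapping (the class name is just for illustration):

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class MatrixOrderCheck {
    // Build an AffineTransform from a row-major 2x3 matrix, applying the same
    // index shuffle as the script (row-major order -> constructor's column order).
    static AffineTransform fromRowMajor(double[] m) {
        return new AffineTransform(m[0], m[3], m[1], m[4], m[2], m[5]);
    }

    public static void main(String[] args) {
        // A pure translation: x' = x + 10, y' = y + 20
        double[] matrix = {1, 0, 10,
                           0, 1, 20};
        Point2D p = fromRowMajor(matrix).transform(new Point2D.Double(5, 5), null);
        System.out.println(p.getX() + ", " + p.getY()); // 15.0, 25.0
    }
}
```

If objects land in mirrored or rotated positions after running the script, this ordering (or the `createInverse` flag) is the first thing to check.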
//Ideally checks one channel for presence above a certain background level, to help remove islets or areas of interest in areas of bad staining
//Could be modified to check for stain bubbles, edge staining artifacts, multiple channels, etc.
//RESETS ANY CLASSIFICATIONS ALREADY SET. Would require substantial revision to avoid reclassifying annotations.
//expansion distance in microns around the annotations that is checked for background; note that this is a STRING
def expansion = "20.0"
def threshold = 5000
//channel variable is part of a String and needs to be exactly correct
def channel = "Channel 2"
import qupath.lib.roi.*
import qupath.lib.objects.*
def pixelSize = getCurrentImageData().getServer().getPixelHeightMicrons()
hierarchy = getCurrentHierarchy()
originals = getAnnotationObjects()
classToSubtract = "Original"
surroundingClass = "Surrounding"
areaClass = "Donut"
//set the class on all of the base objects; lots of objects will be created and this helps keep track.
originals.each{it.setPathClass(getPathClass(surroundingClass))}
selectAnnotations()
runPlugin('qupath.lib.plugins.objects.DilateAnnotationPlugin', '{"radiusMicrons": '+expansion+', "removeInterior": false, "constrainToParent": true}');
originals.each{it.setPathClass(getPathClass(classToSubtract))}
surroundings = getAnnotationObjects().findAll{it.getPathClass() == getPathClass(surroundingClass)}
fireHierarchyUpdate()
for (parent in surroundings){
    //child object should be of the original annotations, now with classToSubtract
    child = parent.getChildObjects()
    updated = PathROIToolsAwt.combineROIs(parent.getROI(), child[0].getROI(), PathROIToolsAwt.CombineOp.SUBTRACT)
    // Remove original annotation, add new ones
    annotations = new PathAnnotationObject(updated, getPathClass(areaClass))
    addObject(annotations)
    //select only the newly created "Donut" annotation before measuring
    selectObjects{it.getPathClass() == getPathClass(areaClass)}
    ///////////MAY NEED TO MANUALLY EDIT THIS LINE and "value" below A BIT BASED ON IMAGE///////////////////
    runPlugin('qupath.lib.algorithms.IntensityFeaturesPlugin', '{"pixelSizeMicrons": '+pixelSize+', "region": "ROI", "tileSizeMicrons": 25.0, "channel1": false, "channel2": true, "channel3": false, "channel4": false, "doMean": true, "doStdDev": true, "doMinMax": false, "doMedian": false, "doHaralick": false, "haralickMin": 0, "haralickMax": 0, "haralickDistance": 1, "haralickBins": 32}');
    donut = getAnnotationObjects().findAll{it.getPathClass()==getPathClass(areaClass)}
    fireHierarchyUpdate()
    value = donut[0].getMeasurementList().getMeasurementValue("ROI: 0.32 " + qupath.lib.common.GeneralTools.micrometerSymbol() + " per pixel: "+channel+": Mean")
    //occasionally the value is NaN for no reason I can figure out. I decided it was safer to keep the results any time
    //this happens for now, though if the preserved regions end up being problematic the && !value.isNaN() should be removed.
    if (value > threshold && !value.isNaN()){
        println("remove, value was "+value)
        removeObject(parent, false)
        removeObject(donut[0], true)
    } else {
        println("keep");
        removeObject(parent, true);
        removeObject(donut[0], true)
    }
}
fireHierarchyUpdate()
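The expand-then-subtract step above (`PathROIToolsAwt.combineROIs` with `CombineOp.SUBTRACT`) is built on `java.awt.geom.Area` operations, so the "Donut" construction can be sketched in plain Java without QuPath. A minimal sketch, with hypothetical shapes standing in for the annotations:

```java
import java.awt.Rectangle;
import java.awt.geom.Area;

public class DonutSketch {
    // Subtract the original annotation area from the expanded one,
    // leaving only the surrounding ring that gets measured for background.
    static Area donut(Area expanded, Area original) {
        Area result = new Area(expanded);
        result.subtract(original);
        return result;
    }

    public static void main(String[] args) {
        Area expanded = new Area(new Rectangle(0, 0, 100, 100));  // dilated annotation
        Area original = new Area(new Rectangle(20, 20, 60, 60));  // original annotation
        Area ring = donut(expanded, original);
        System.out.println(ring.contains(10, 10)); // true: point lies in the ring
        System.out.println(ring.contains(50, 50)); // false: point was inside the original
    }
}
```

The background mean is then measured only over this ring, so staining inside the original annotation never influences the keep/remove decision.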
//V3 Corrected classification over-write error on classifiers with more than 3 parts
import qupath.lib.gui.tools.ColorToolsFX;
import javafx.scene.paint.Color;
//Hopefully you can simply replace the fileName with your classifier, and include this in a script.
fileName = "MyClassifier"
positive = []
path = buildFilePath(PROJECT_BASE_DIR, "classifiers", fileName)
new File(path).withObjectInputStream {
    cObj = it.readObject()
}
//Create an arraylist with the same number of entries as classes
CHANNELS = cObj.size()
//println(cObj)
//set up for classifier
def cells = getCellObjects()
cells.each {it.setPathClass(getPathClass('Negative'))}
//start classifier with all cells negative
for (def i=0; i<CHANNELS; i++){
    def lower = Float.parseFloat(cObj[i][1])
    def upper = Float.parseFloat(cObj[i][3])
    //create lists for each measurement, classify cells based off of those measurements
    positive[i] = cells.findAll {measurement(it, cObj[i][0]) >= lower && measurement(it, cObj[i][0]) <= upper}
    positive[i].each {it.setPathClass(getPathClass(cObj[i][2]+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)}
    c = Color.web(cObj[i][4])
    currentPathClass = getPathClass(cObj[i][2]+' positive')
    //for some reason setColor needs to be used here instead of setColorRGB, which applies to objects and not classes
    currentPathClass.setColor(ColorToolsFX.getRGB(c))
}
for (def i=0; i<(CHANNELS-1); i++){
    //println(i)
    int remaining = 0
    for (def j = i+1; j<CHANNELS; j++){
        remaining += 1
    }
    depth = 2
    classifier(cObj[i][2], positive[i], remaining, i)
}
Set classSet = []
for (object in getCellObjects()) {
    classSet << object.getPathClass()
}
List classList = []
classList.addAll(classSet.findAll{it != getPathClass("Negative")})
print("Class list: "+ classList)
classList.each{
    className = it.getName()
    cells = getCellObjects().findAll{it.getPathClass() == getPathClass(className)}
    //remove the " positive"
    classNameList = className.tokenize(' ')[0]
    classNameList = classNameList.tokenize(',')
    classNameList.sort()
    name = classNameList.join(',')
    //print name
    cells.each{it.setPathClass(getPathClass(name+" positive"))}
}
fireHierarchyUpdate()
def classifier (listAName, listA, remainingListSize, position){
    //current point in the list of lists, allows access to the measurements needed to figure out what from the current class is also part of the next class
    for (def y=0; y <remainingListSize; y++){
        k = (position+y+1).intValue()
        // get the measurements needed to determine if a cell is a member of the next class (from listOfLists)
        def lower = Float.parseFloat(cObj[k][1])
        def upper = Float.parseFloat(cObj[k][3])
        //intersect the listA with the first of the listOfLists
        //on the first run, this would take all of Class 1, and compare it with measurements that determine Class 2, resulting in a subset of
        //Class 1 that meets both criteria
        def passList = listA.findAll {measurement(it, cObj[k][0]) >= lower && measurement(it, cObj[k][0]) <= upper}
        newName = cObj[k][2]
        //Create a new name based off of the current name and the newly compared class
        //on the first runthrough this would give "Class 1,Class 2 positive"
        def mergeName = listAName+","+newName
        passList.each{
            if (it.getMeasurementList().getMeasurementValue("ClassDepth") < depth) {
                it.setPathClass(getPathClass(mergeName+' positive'));
                it.getMeasurementList().putMeasurement("ClassDepth", depth)
            }
        }
        if (k == (positive.size()-1)){
            //println(passList.size()+" number of "+mergeName+" cells passed")
            for (def z=0; z<CHANNELS; z++){
                //println("before"+positive[z].size())
                positive[z] = positive[z].minus(passList)
                //println(z+" after "+positive[z].size())
            }
            depth -= 1
            return;
        } else{
            def passAlong = remainingListSize-1
            //println("passAlong "+passAlong)
            //println("name for next " +mergeName)
            depth += 1
            classifier(mergeName, passList, passAlong, k)
        }
    }
}
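The class-name normalization the script performs at the end (the tokenize/sort/join block) is what prevents order mismatches between slides: "C3,C1 positive" and "C1,C3 positive" should be the same class. A minimal stand-alone sketch of that sequence, in plain Java with an illustrative class name:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class ClassNameNormalize {
    // Mirrors the script's tokenize(' ')[0] / tokenize(',') / sort() / join(',') steps:
    // strip the " positive" suffix, sort the comma-separated base classes, rejoin.
    static String normalize(String className) {
        String base = className.split(" ")[0];
        String sorted = Arrays.stream(base.split(","))
                .sorted()
                .collect(Collectors.joining(","));
        return sorted + " positive";
    }

    public static void main(String[] args) {
        System.out.println(normalize("C3,C1 positive")); // C1,C3 positive
        System.out.println(normalize("C1,C3 positive")); // C1,C3 positive
    }
}
```

Because both orderings map to the same normalized name, downstream counts and exports no longer split one biological class across two labels.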
//V5 Corrected classification over-write error on classifiers with more than 3 parts
//Added a correction so that class lists are always in alphabetical order, preventing order mismatches. Hopefully.
//Updated for M8
import javafx.application.Platform
import javafx.beans.property.SimpleLongProperty
import javafx.geometry.Insets
import javafx.scene.Scene
import javafx.geometry.Pos
import javafx.scene.control.Button
import javafx.scene.control.Label
import javafx.scene.control.TableView
import javafx.scene.control.TextField
import javafx.scene.control.CheckBox
import javafx.scene.control.ComboBox
import javafx.scene.control.TableColumn
import javafx.scene.control.ColorPicker
import javafx.scene.layout.BorderPane
import javafx.scene.layout.GridPane
import javafx.scene.control.Tooltip
import javafx.stage.Stage
import qupath.lib.gui.QuPathGUI
import qupath.lib.gui.tools.ColorToolsFX;
import javafx.scene.paint.Color;
//Settings to control the dialog boxes for the GUI
int col = 0
int row = 0
int textFieldWidth = 120
int labelWidth = 150
def gridPane = new GridPane()
gridPane.setPadding(new Insets(10, 10, 10, 10));
gridPane.setVgap(2);
gridPane.setHgap(10);
def server = getCurrentImageData().getServer()
//Upper thresholds will default to the max bit depth, since that is likely the most common upper limit for a given image.
def metadata = getCurrentImageData().getServer().getOriginalMetadata()
def pixelSize = metadata.pixelCalibration.pixelWidth.value
maxPixel = Math.pow((double) 2, (double)server.getPixelType().getBitsPerPixel()) - 1
positive = []
//print(maxPixel)
def titleLabel = new Label("Intended for use where one marker determines a base class.\nFor example, you could use Channel 1 Cytoplasmic Mean and Channel 2 Nuclear Mean\nto generate two base classes and a Double positive class where each condition is true.\n\n")
gridPane.add(titleLabel, col, row++, 3, 1)
def requestLabel = new Label("How many base classes/single measurements are you interested in?\nThe above example would have two.\n")
gridPane.add(requestLabel, col, row++, 3, 1)
def TextField classText = new TextField("2");
classText.setMaxWidth(textFieldWidth);
classText.setAlignment(Pos.CENTER_RIGHT)
gridPane.add(classText, col++, row, 1, 1)
//ArrayList<Label> channelLabels
Button startButton = new Button()
startButton.setText("Start Classifying")
gridPane.add(startButton, col, row++, 1, 1)
startButton.setTooltip(new Tooltip("If you need to change the number of classes, re-run the script"));
col = 0
row += 10 //spacer
def loadLabel = new Label("Load a classifier:")
gridPane.add(loadLabel, col++, row, 2, 1)
def TextField classFile = new TextField("MyClassifier");
classFile.setMaxWidth(textFieldWidth);
classFile.setAlignment(Pos.CENTER_RIGHT)
gridPane.add(classFile, col++, row, 1, 1)
Button loadButton = new Button()
loadButton.setText("Load Classifier")
gridPane.add(loadButton, col++, row++, 1, 1)
//incredibly lazy and sloppy coding, just a copy and paste taking slightly different inputs
loadButton.setOnAction{
    path = buildFilePath(PROJECT_BASE_DIR, "classifiers", classFile.getText())
    new File(path).withObjectInputStream {
        cObj = it.readObject()
    }
    //Create an arraylist with the same number of entries as classes
    CHANNELS = cObj.size()
    col = 0
    row = 0
    def secondGridPane = new GridPane()
    secondGridPane.setPadding(new Insets(10, 10, 10, 10));
    secondGridPane.setVgap(2);
    secondGridPane.setHgap(10);
    def assist = new Label("Short Class Names are recommended as dual positives and beyond use the full names of all positive classes.\n ")
    secondGridPane.add(assist, col, row++, 5, 1)
    def mChoice = new Label("Measurement ")
    mChoice.setMaxWidth(400)
    mChoice.setAlignment(Pos.CENTER_RIGHT)
    def mLowThresh = new Label("Lower Threshold <= ")
    def mHighThresh = new Label("<= Upper Threshold ")
    def mClassName = new Label("Class Name ")
    secondGridPane.add(mChoice, col++, row, 1, 1)
    secondGridPane.add(mLowThresh, col++, row, 1, 1)
    secondGridPane.add(mClassName, col++, row, 1, 1)
    secondGridPane.add(mHighThresh, col, row++, 1, 1)
    //create data structures to use for building the classifier
    boxes = new ComboBox[CHANNELS]
    lowerTs = new TextField[CHANNELS]
    classList = new TextField[CHANNELS]
    upperTs = new TextField[CHANNELS]
    colorPickers = new ColorPicker[CHANNELS]
    //create the dialog where the user will select the measurements of interest and values
    for (def i=0; i<CHANNELS; i++) {
        col = 0
        //Add to dialog box, new row for each
        boxes[i] = new ComboBox()
        qupath.lib.classifiers.PathClassifierTools.getAvailableFeatures(getDetectionObjects()).each {boxes[i].getItems().add(it)}
        boxes[i].setValue(cObj[i][0])
        classList[i] = new TextField(cObj[i][2])
        lowerTs[i] = new TextField(cObj[i][1])
        upperTs[i] = new TextField(cObj[i][3])
        classList[i].setMaxWidth(textFieldWidth);
        classList[i].setAlignment(Pos.CENTER_RIGHT)
        lowerTs[i].setMaxWidth(textFieldWidth);
        lowerTs[i].setAlignment(Pos.CENTER_RIGHT)
        upperTs[i].setMaxWidth(textFieldWidth);
        upperTs[i].setAlignment(Pos.CENTER_RIGHT)
        colorPickers[i] = new ColorPicker(Color.web(cObj[i][4]))
        secondGridPane.add(boxes[i], col++, row, 1, 1)
        secondGridPane.add(lowerTs[i], col++, row, 1, 1)
        secondGridPane.add(classList[i], col++, row, 1, 1)
        secondGridPane.add(upperTs[i], col++, row, 1, 1)
        secondGridPane.add(colorPickers[i], col++, row++, 1, 1)
    }
    Button runButton = new Button()
    runButton.setText("Run Classifier")
    secondGridPane.add(runButton, 0, row++, 1, 1)
    //All stuff for actually classifying cells
    runButton.setOnAction {
        //set up for classifier
        def cells = getCellObjects()
        cells.each {it.setPathClass(getPathClass('Negative'))}
        startTime = System.currentTimeMillis()
        //start classifier with all cells negative
        for (def i=0; i<CHANNELS; i++){
            def lower = Float.parseFloat(lowerTs[i].getText())
            def upper = Float.parseFloat(upperTs[i].getText())
            //create lists for each measurement, classify cells based off of those measurements
            positive[i] = cells.findAll {measurement(it, boxes[i].getValue()) >= lower && measurement(it, boxes[i].getValue()) <= upper}
            positive[i].each {it.setPathClass(getPathClass(classList[i].getText()+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)}
            c = colorPickers[i].getValue()
            currentPathClass = getPathClass(classList[i].getText()+' positive')
            //for some reason setColor needs to be used here instead of setColorRGB, which applies to objects and not classes
            currentPathClass.setColor(ColorToolsFX.getRGB(c))
        }
        //Call the classifier on each list of positive single class cells, except for the last one!
        for (def i=0; i<(CHANNELS-1); i++){
            println("ROUND "+i)
            int remaining = 0
            for (def j = i+1; j<CHANNELS; j++){
                remaining += 1
            }
            //println("SENDING CELLS TO CLASSIFIER "+positive[i].size())
            depth = 2
            classifier(classList[i].getText(), positive[i], remaining, i)
        }
        //A desperate attempt to fix the possibility of class name mismatch between slides
        Set classSet = []
        for (object in getCellObjects()) {
            classSet << object.getPathClass()
        }
        List classyList = []
        classyList.addAll(classSet.findAll{it != getPathClass("Negative")})
        classyList.each{
            className = it.getName()
            cells = getCellObjects().findAll{it.getPathClass() == getPathClass(className)}
            //remove the " positive"
            classNameList = className.tokenize(' ')[0]
            classNameList = classNameList.tokenize(',')
            classNameList.sort()
            name = classNameList.join(',')
            //print name
            cells.each{it.setPathClass(getPathClass(name+" positive"))}
        }
        println("classifier done")
        fireHierarchyUpdate()
    }
    //end Run Button
    row += 10 //spacer
    Button saveButton = new Button()
    saveButton.setText("Save Classifier")
    secondGridPane.add(saveButton, 1, row, 1, 1)
    def TextField saveFile = new TextField("MyClassifier");
    saveFile.setMaxWidth(textFieldWidth);
    saveFile.setAlignment(Pos.CENTER_RIGHT)
    secondGridPane.add(saveFile, 2, row++, 1, 1)
    //All stuff for saving the classifier
    saveButton.setOnAction {
        def export = []
        for (def l=0; l<CHANNELS; l++){
            export << [boxes[l].getValue(), lowerTs[l].getText(), classList[l].getText(), upperTs[l].getText(), colorPickers[l].getValue().toString()]
        }
        mkdirs(buildFilePath(PROJECT_BASE_DIR, "classifiers"))
        path = buildFilePath(PROJECT_BASE_DIR, "classifiers", saveFile.getText())
        new File(path).withObjectOutputStream {
            it.writeObject(export)
        }
    }
    //End of classifier window
    Platform.runLater {
        def stage3 = new Stage()
        stage3.initOwner(QuPathGUI.getInstance().getStage())
        stage3.setScene(new Scene(secondGridPane))
        stage3.setTitle("Loaded Classifier "+classFile.getText())
        stage3.setWidth(870);
        stage3.setHeight(900);
        //stage.setResizable(false);
        stage3.show()
    }
}
//end of the loaded classifier | |
startButton.setOnAction { | |
col = 0 | |
row = 0 | |
//Create an arraylist with the same number of entries as classes | |
CHANNELS = Float.parseFloat(classText.getText()) | |
//channelLabels = new ArrayList( CHANNELS) | |
def secondGridPane = new GridPane() | |
secondGridPane.setPadding(new Insets(10, 10, 10, 10)); | |
secondGridPane.setVgap(2); | |
secondGridPane.setHgap(10); | |
def assist = new Label("Short Class Names are recommended as dual positives and beyond use the full names of all positive classes.\n ") | |
secondGridPane.add(assist, col, row++, 5, 1) | |
def mChoice = new Label("Measurement ") | |
mChoice.setMaxWidth(400) | |
mChoice.setAlignment(Pos.CENTER_RIGHT) | |
def mLowThresh = new Label("Lower Threshold <= ") | |
def mHighThresh = new Label("<= Upper Threshold ") | |
def mClassName = new Label("Class Name ") | |
secondGridPane.add( mChoice, col++, row, 1,1) | |
secondGridPane.add( mLowThresh, col++, row, 1,1) | |
secondGridPane.add( mClassName, col++, row, 1,1) | |
secondGridPane.add( mHighThresh, col, row++, 1,1) | |
//create data structures to use for building the classifier | |
boxes = new ComboBox [CHANNELS] | |
lowerTs = new TextField [CHANNELS] | |
classList = new TextField [CHANNELS] | |
upperTs = new TextField [CHANNELS] | |
colorPickers = new ColorPicker [CHANNELS] | |
//create the dialog where the user will select the measurements of interest and values | |
for (def i=0; i<CHANNELS;i++) { | |
col =0 | |
//Add to dialog box, new row for each | |
boxes[i] = new ComboBox() | |
qupath.lib.classifiers.PathClassifierTools.getAvailableFeatures(getDetectionObjects()).each {boxes[i].getItems().add(it) } | |
classList[i] = new TextField("C" + (i+1)) | |
lowerTs[i] = new TextField("0") | |
upperTs[i] = new TextField(maxPixel.toString()) | |
classList[i].setMaxWidth( textFieldWidth); | |
classList[i].setAlignment(Pos.CENTER_RIGHT) | |
lowerTs[i].setMaxWidth( textFieldWidth); | |
lowerTs[i].setAlignment(Pos.CENTER_RIGHT) | |
upperTs[i].setMaxWidth( textFieldWidth); | |
upperTs[i].setAlignment(Pos.CENTER_RIGHT) | |
colorPickers[i] = new ColorPicker() | |
secondGridPane.add(boxes[i], col++, row, 1,1) | |
secondGridPane.add(lowerTs[i], col++, row, 1,1) | |
secondGridPane.add(classList[i], col++, row, 1, 1) | |
secondGridPane.add(upperTs[i], col++, row, 1,1) | |
secondGridPane.add(colorPickers[i], col++, row++, 1,1) | |
} | |
Button runButton = new Button() | |
runButton.setText("Run Classifier") | |
secondGridPane.add(runButton, 0, row++, 1, 1) | |
//All stuff for actually classifying cells | |
runButton.setOnAction { | |
//set up for classifier | |
def cells = getCellObjects() | |
cells.each {it.setPathClass(getPathClass('Negative'))} | |
//start classifier with all cells negative | |
for (def i=0; i<CHANNELS; i++){ | |
def lower = Float.parseFloat(lowerTs[i].getText()) | |
def upper = Float.parseFloat(upperTs[i].getText()) | |
//create lists for each measurement, classify cells based off of those measurements | |
positive[i] = cells.findAll {measurement(it, boxes[i].getValue()) >= lower && measurement(it, boxes[i].getValue()) <= upper} | |
positive[i].each {it.setPathClass(getPathClass(classList[i].getText()+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)} | |
c = colorPickers[i].getValue() | |
currentPathClass = getPathClass(classList[i].getText()+' positive') | |
//for some reason setColor needs to be used here instead of setColorRGB which applies to objects and not classes? | |
currentPathClass.setColor(ColorToolsFX.getRGB(c)) | |
} | |
//Call the classifier on each list of positive single class cells, except for the last one! | |
for (def i=0; i<(CHANNELS-1); i++){ | |
println("ROUND "+i) | |
int remaining = 0 | |
for (def j = i+1; j<CHANNELS; j++){ | |
remaining +=1 | |
} | |
//println("SENDING CELLS TO CLASSIFIER "+positive[i].size()) | |
depth = 2 | |
classifier(classList[i].getText(), positive[i], remaining, i) | |
} | |
//A desperate attempt to fix the possibility of class name mismatch between slides | |
Set classSet = [] | |
for (object in getCellObjects()) { | |
classSet << object.getPathClass() | |
} | |
List classyList = [] | |
classyList.addAll(classSet.findAll{it != getPathClass("Negative") }) | |
classyList.each{ | |
className = it.getName() | |
cells = getCellObjects().findAll{it.getPathClass() == getPathClass(className)} | |
//remove the " positive" | |
classNameList = className.tokenize(' ')[0] | |
classNameList = classNameList.tokenize(',') | |
classNameList.sort() | |
name = classNameList.join(',') | |
//print name | |
cells.each{it.setPathClass(getPathClass(name+" positive"))} | |
} | |
println("clasifier done") | |
fireHierarchyUpdate() | |
} | |
//end Run Button | |
////////////////////////// | |
row+=10 //spacer | |
Button saveButton = new Button() | |
saveButton.setText("Save Classifier") | |
secondGridPane.add(saveButton, 1, row, 1, 1) | |
def TextField saveFile = new TextField("MyClassifier"); | |
saveFile.setMaxWidth( textFieldWidth); | |
saveFile.setAlignment(Pos.CENTER_RIGHT) | |
secondGridPane.add( saveFile, 2, row++, 1, 1) | |
//All stuff for actually classifying cells | |
saveButton.setOnAction { | |
def export = [] | |
for (def l=0; l<CHANNELS;l++){ | |
export << [boxes[l].getValue(), lowerTs[l].getText(), classList[l].getText(), upperTs[l].getText(), colorPickers[l].getValue().toString()] | |
} | |
mkdirs(buildFilePath(PROJECT_BASE_DIR, "classifiers")) | |
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",saveFile.getText()) | |
new File(path).withObjectOutputStream { | |
it.writeObject(export) | |
} | |
} | |
////////////////////// | |
//End of classifier window | |
Platform.runLater { | |
def stage2 = new Stage() | |
stage2.initOwner(QuPathGUI.getInstance().getStage()) | |
stage2.setScene(new Scene( secondGridPane)) | |
stage2.setTitle("Build Classifier ") | |
stage2.setWidth(870); | |
stage2.setHeight(900); | |
//stage.setResizable(false); | |
stage2.show() | |
} | |
} | |
//Some stuff that controls the dialog box showing up. I don't really understand it but it is needed. | |
Platform.runLater { | |
def stage = new Stage() | |
stage.initOwner(QuPathGUI.getInstance().getStage()) | |
stage.setScene(new Scene( gridPane)) | |
stage.setTitle("Simple Classifier for Multiple Classes ") | |
stage.setWidth(550); | |
stage.setHeight(300); | |
//stage.setResizable(false); | |
stage.show() | |
} | |
//Recursive function to keep track of what needs to be classified next. | |
//listAName is the current classifier name (for example Class 1 during the first pass) which gets modified with the intersect | |
//and would result in cells from this pass being called Class 1,Class2 positive. | |
//listA is the current list of cells being checked for intersection with the first member of... | |
//remainingListSize is the number of lists in "positive[]" that the current list needs to be checked against | |
//position keeps track of the starting position of listAName class. So on the first runthrough everything will start with C1 | |
//The next runthrough will start with position 2 since the base class will be C2 | |
void classifier (listAName, listA, remainingListSize, position = 0){ | |
//println("listofLists " +remainingListSize) | |
//println("base list size"+listA.size()) | |
for (def y=0; y <remainingListSize; y++){ | |
//println("listofLists in loop" +remainingListSize) | |
//println("y "+y) | |
//println("depth"+depth) | |
k = (position+y+1).intValue() | |
//println("k "+k) | |
// get the measurements needed to determine if a cell is a member of the next class (from listOfLists) | |
def lower = Float.parseFloat(lowerTs[k].getText()) | |
def upper = Float.parseFloat(upperTs[k].getText()) | |
//intersect the listA with the first of the listOfLists | |
//on the first run, this would take all of Class 1, and compare it with measurements that determine Class 2, resulting in a subset of | |
//Class 1 that meet both criteria | |
def passList = listA.findAll {measurement(it, boxes[k].getValue()) >= lower && measurement(it, boxes[k].getValue()) <= upper} | |
newName = classList[k].getText() | |
//Create a new name based off of the current name and the newly compared class | |
// on the first runthrough this would give "Class 1,Class 2 positive" | |
def mergeName = listAName+","+newName | |
//println("depth "+depth) | |
//println(mergeName+" with number of remaining lists "+remainingListSize) | |
passList.each{ | |
//Check whether the class being applied is "shorter" than the cell's current class. | |
//This prevents something like "C2,C3" from overwriting "C1,C2,C3,C4" from the first call. | |
if (it.getMeasurementList().getMeasurementValue("ClassDepth") < depth) { | |
it.setPathClass(getPathClass(mergeName+' positive')); | |
it.getMeasurementList().putMeasurement("ClassDepth", depth) | |
} | |
} | |
if (k == (positive.size()-1)){ | |
//If we are comparing the current list to the last positive class list, we are done | |
//Go up one level of classifier depth and return | |
depth -=1 | |
return; | |
} else{ | |
//Otherwise, move one place further along the "positive" list of base classes, and increase depth | |
//This happens when going from C1,C2 to C1,C2,C3 etc. | |
def passAlong = remainingListSize-1 | |
//println("passAlong "+passAlong.size()) | |
//println("name for next " +mergeName) | |
depth +=1 | |
classifier(mergeName, passList, passAlong, k) | |
} | |
//println("loopy depth"+depth) | |
} | |
} |
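The recursive combination logic in `classifier()` above can be sketched outside QuPath. This is an illustrative plain-Python rewrite of the same idea, not the script itself: membership sets stand in for the `positive[]` lists, the depth value is passed as a parameter instead of the script's global `depth` variable, and the helper names (`classify`, `recurse`) are mine. It shows how a cell passing several base classes ends up with the comma-joined class name, and how the depth check keeps a shorter combination like C2,C3 from overwriting a longer one like C1,C2,C3.

```python
def classify(cells, positives):
    """cells: iterable of cell ids; positives[i]: set of ids passing the
    thresholds for base class Ci+1 (mirrors the positive[] lists above).
    Returns {cell id: combined class name}."""
    name = {c: "Negative" for c in cells}
    depth = {c: 0 for c in cells}

    # Base pass: each member of positives[i] is labelled Ci+1 at depth 1.
    for i, members in enumerate(positives):
        for c in members:
            name[c] = "C%d" % (i + 1)
            depth[c] = 1

    # Recursive pass: intersect the current set with each later base class,
    # extending the comma-joined name; only relabel a cell when the new
    # combination is deeper (longer) than the one it already carries.
    def recurse(base_name, members, remaining, position, d):
        for y in range(remaining):
            k = position + y + 1
            passing = members & positives[k]
            merged = base_name + ",C%d" % (k + 1)
            for c in passing:
                if depth[c] < d:
                    name[c] = merged
                    depth[c] = d
            if k < len(positives) - 1:
                recurse(merged, passing, remaining - y - 1, k, d + 1)

    for i in range(len(positives) - 1):
        recurse("C%d" % (i + 1), positives[i], len(positives) - 1 - i, i, 2)
    return name
```

With three base classes, a cell passing all three ends up as C1,C2,C3, one passing only the last two as C2,C3, and the depth check stops later top-level rounds from demoting the triple positive.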
//V1 Edited slightly to work for tiles/SLICs | |
import javafx.application.Platform | |
import javafx.beans.property.SimpleLongProperty | |
import javafx.geometry.Insets | |
import javafx.scene.Scene | |
import javafx.geometry.Pos | |
import javafx.scene.control.Button | |
import javafx.scene.control.Label | |
import javafx.scene.control.TableView | |
import javafx.scene.control.TextField | |
import javafx.scene.control.CheckBox | |
import javafx.scene.control.ComboBox | |
import javafx.scene.control.TableColumn | |
import javafx.scene.control.ColorPicker | |
import javafx.scene.layout.BorderPane | |
import javafx.scene.layout.GridPane | |
import javafx.scene.control.Tooltip | |
import javafx.stage.Stage | |
import qupath.lib.gui.QuPathGUI | |
import qupath.lib.gui.helpers.ColorToolsFX; | |
import javafx.scene.paint.Color; | |
//Settings to control the dialog boxes for the GUI | |
int col = 0 | |
int row = 0 | |
int textFieldWidth = 120 | |
int labelWidth = 150 | |
def gridPane = new GridPane() | |
gridPane.setPadding(new Insets(10, 10, 10, 10)); | |
gridPane.setVgap(2); | |
gridPane.setHgap(10); | |
def server = getCurrentImageData().getServer() | |
//Upper thresholds default to the maximum value representable at the image's bit depth, since that is the most common upper limit for a given image. | |
maxPixel = Math.pow((double) 2,(double)server.getBitsPerPixel())-1 | |
positive = [] | |
//print(maxPixel) | |
def titleLabel = new Label("Intended for use where one marker determines a base class.\nFor example, you could use Channel 1 Cytoplasmic Mean and Channel 2 Nuclear Mean\nto generate two base classes and a Double positive class where each condition is true.\n\n") | |
gridPane.add(titleLabel,col, row++, 3, 1) | |
def requestLabel = new Label("How many base classes/single measurements are you interested in?\nThe above example would have two.\n") | |
gridPane.add(requestLabel,col, row++, 3, 1) | |
def TextField classText = new TextField("2"); | |
classText.setMaxWidth( textFieldWidth); | |
classText.setAlignment(Pos.CENTER_RIGHT) | |
gridPane.add(classText, col++, row, 1, 1) | |
//ArrayList<Label> channelLabels | |
Button startButton = new Button() | |
startButton.setText("Start Classifying") | |
gridPane.add(startButton, col, row++, 1, 1) | |
startButton.setTooltip(new Tooltip("If you need to change the number of classes, re-run the script")); | |
col = 0 | |
row+=10 //spacer | |
def loadLabel = new Label("Load a classifier:") | |
gridPane.add(loadLabel,col++, row, 2, 1) | |
def TextField classFile = new TextField("MyClassifier"); | |
classFile.setMaxWidth( textFieldWidth); | |
classFile.setAlignment(Pos.CENTER_RIGHT) | |
gridPane.add( classFile, col++, row, 1, 1) | |
Button loadButton = new Button() | |
loadButton.setText("Load Classifier") | |
gridPane.add(loadButton, col++, row++, 1, 1) | |
//The load path reuses the classifier-builder code below, simply substituting the saved values as inputs | |
loadButton.setOnAction{ | |
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",classFile.getText()) | |
new File(path).withObjectInputStream { | |
cObj = it.readObject() | |
} | |
//Create an arraylist with the same number of entries as classes | |
CHANNELS = cObj.size() | |
col = 0 | |
row = 0 | |
def secondGridPane = new GridPane() | |
secondGridPane.setPadding(new Insets(10, 10, 10, 10)); | |
secondGridPane.setVgap(2); | |
secondGridPane.setHgap(10); | |
def assist = new Label("Short Class Names are recommended as dual positives and beyond use the full names of all positive classes.\n ") | |
secondGridPane.add(assist, col, row++, 5, 1) | |
def mChoice = new Label("Measurement ") | |
mChoice.setMaxWidth(400) | |
mChoice.setAlignment(Pos.CENTER_RIGHT) | |
def mLowThresh = new Label("Lower Threshold <= ") | |
def mHighThresh = new Label("<= Upper Threshold ") | |
def mClassName = new Label("Class Name ") | |
secondGridPane.add( mChoice, col++, row, 1,1) | |
secondGridPane.add( mLowThresh, col++, row, 1,1) | |
secondGridPane.add( mClassName, col++, row, 1,1) | |
secondGridPane.add( mHighThresh, col, row++, 1,1) | |
//create data structures to use for building the classifier | |
boxes = new ComboBox [CHANNELS] | |
lowerTs = new TextField [CHANNELS] | |
classList = new TextField [CHANNELS] | |
upperTs = new TextField [CHANNELS] | |
colorPickers = new ColorPicker [CHANNELS] | |
//create the dialog where the user will select the measurements of interest and values | |
for (def i=0; i<CHANNELS;i++) { | |
col =0 | |
//Add to dialog box, new row for each | |
boxes[i] = new ComboBox() | |
qupath.lib.classifiers.PathClassificationLabellingHelper.getAvailableFeatures(getDetectionObjects()).each {boxes[i].getItems().add(it) } | |
boxes[i].setValue(cObj[i][0]) | |
classList[i] = new TextField(cObj[i][2]) | |
lowerTs[i] = new TextField(cObj[i][1]) | |
upperTs[i] = new TextField(cObj[i][3]) | |
classList[i].setMaxWidth( textFieldWidth); | |
classList[i].setAlignment(Pos.CENTER_RIGHT) | |
lowerTs[i].setMaxWidth( textFieldWidth); | |
lowerTs[i].setAlignment(Pos.CENTER_RIGHT) | |
upperTs[i].setMaxWidth( textFieldWidth); | |
upperTs[i].setAlignment(Pos.CENTER_RIGHT) | |
colorPickers[i] = new ColorPicker(Color.web(cObj[i][4])) | |
secondGridPane.add(boxes[i], col++, row, 1,1) | |
secondGridPane.add(lowerTs[i], col++, row, 1,1) | |
secondGridPane.add(classList[i], col++, row, 1, 1) | |
secondGridPane.add(upperTs[i], col++, row, 1,1) | |
secondGridPane.add(colorPickers[i], col++, row++, 1,1) | |
} | |
Button runButton = new Button() | |
runButton.setText("Run Classifier") | |
secondGridPane.add(runButton, 0, row++, 1, 1) | |
//All stuff for actually classifying cells | |
runButton.setOnAction { | |
//set up for classifier | |
def cells = getDetectionObjects() | |
cells.each {it.setPathClass(getPathClass('Negative'))} | |
startTime = System.currentTimeMillis() | |
//start classifier with all cells negative | |
for (def i=0; i<CHANNELS; i++){ | |
def lower = Float.parseFloat(lowerTs[i].getText()) | |
def upper = Float.parseFloat(upperTs[i].getText()) | |
//create lists for each measurement, classify cells based off of those measurements | |
positive[i] = cells.findAll {measurement(it, boxes[i].getValue()) >= lower && measurement(it, boxes[i].getValue()) <= upper} | |
positive[i].each {it.setPathClass(getPathClass(classList[i].getText()+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)} | |
c = colorPickers[i].getValue() | |
currentPathClass = getPathClass(classList[i].getText()+' positive') | |
//setColor is used here because it applies to classes; setColorRGB applies to objects, not classes | |
currentPathClass.setColor(ColorToolsFX.getRGBA(c)) | |
} | |
//Call the classifier on each list of positive single class cells, except for the last one! | |
for (def i=0; i<(CHANNELS-1); i++){ | |
println("ROUND "+i) | |
int remaining = 0 | |
for (def j = i+1; j<CHANNELS; j++){ | |
remaining +=1 | |
} | |
//println("SENDING CELLS TO CLASSIFIER "+positive[i].size()) | |
depth = 2 | |
classifier(classList[i].getText(), positive[i], remaining, i) | |
} | |
println("classifier done") | |
fireHierarchyUpdate() | |
} | |
//end Run Button | |
row+=10 //spacer | |
Button saveButton = new Button() | |
saveButton.setText("Save Classifier") | |
secondGridPane.add(saveButton, 1, row, 1, 1) | |
def TextField saveFile = new TextField("MyClassifier"); | |
saveFile.setMaxWidth( textFieldWidth); | |
saveFile.setAlignment(Pos.CENTER_RIGHT) | |
secondGridPane.add( saveFile, 2, row++, 1, 1) | |
//Save the current classifier settings to a file in the project's classifiers folder | |
saveButton.setOnAction { | |
def export = [] | |
for (def l=0; l<CHANNELS;l++){ | |
export << [boxes[l].getValue(), lowerTs[l].getText(), classList[l].getText(), upperTs[l].getText(), colorPickers[l].getValue().toString()] | |
} | |
mkdirs(buildFilePath(PROJECT_BASE_DIR, "classifiers")) | |
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",saveFile.getText()) | |
new File(path).withObjectOutputStream { | |
it.writeObject(export) | |
} | |
} | |
//End of classifier window | |
Platform.runLater { | |
def stage3 = new Stage() | |
stage3.initOwner(QuPathGUI.getInstance().getStage()) | |
stage3.setScene(new Scene( secondGridPane)) | |
stage3.setTitle("Loaded Classifier "+classFile.getText()) | |
stage3.setWidth(870); | |
stage3.setHeight(900); | |
//stage.setResizable(false); | |
stage3.show() | |
} | |
} | |
//end of the loaded classifier | |
startButton.setOnAction { | |
col = 0 | |
row = 0 | |
//Create an arraylist with the same number of entries as classes | |
CHANNELS = Integer.parseInt(classText.getText()) | |
//channelLabels = new ArrayList( CHANNELS) | |
def secondGridPane = new GridPane() | |
secondGridPane.setPadding(new Insets(10, 10, 10, 10)); | |
secondGridPane.setVgap(2); | |
secondGridPane.setHgap(10); | |
def assist = new Label("Short Class Names are recommended as dual positives and beyond use the full names of all positive classes.\n ") | |
secondGridPane.add(assist, col, row++, 5, 1) | |
def mChoice = new Label("Measurement ") | |
mChoice.setMaxWidth(400) | |
mChoice.setAlignment(Pos.CENTER_RIGHT) | |
def mLowThresh = new Label("Lower Threshold <= ") | |
def mHighThresh = new Label("<= Upper Threshold ") | |
def mClassName = new Label("Class Name ") | |
secondGridPane.add( mChoice, col++, row, 1,1) | |
secondGridPane.add( mLowThresh, col++, row, 1,1) | |
secondGridPane.add( mClassName, col++, row, 1,1) | |
secondGridPane.add( mHighThresh, col, row++, 1,1) | |
//create data structures to use for building the classifier | |
boxes = new ComboBox [CHANNELS] | |
lowerTs = new TextField [CHANNELS] | |
classList = new TextField [CHANNELS] | |
upperTs = new TextField [CHANNELS] | |
colorPickers = new ColorPicker [CHANNELS] | |
//create the dialog where the user will select the measurements of interest and values | |
for (def i=0; i<CHANNELS;i++) { | |
col =0 | |
//Add to dialog box, new row for each | |
boxes[i] = new ComboBox() | |
qupath.lib.classifiers.PathClassificationLabellingHelper.getAvailableFeatures(getDetectionObjects()).each {boxes[i].getItems().add(it) } | |
classList[i] = new TextField("C" + (i+1)) | |
lowerTs[i] = new TextField("0") | |
upperTs[i] = new TextField(maxPixel.toString()) | |
classList[i].setMaxWidth( textFieldWidth); | |
classList[i].setAlignment(Pos.CENTER_RIGHT) | |
lowerTs[i].setMaxWidth( textFieldWidth); | |
lowerTs[i].setAlignment(Pos.CENTER_RIGHT) | |
upperTs[i].setMaxWidth( textFieldWidth); | |
upperTs[i].setAlignment(Pos.CENTER_RIGHT) | |
colorPickers[i] = new ColorPicker() | |
secondGridPane.add(boxes[i], col++, row, 1,1) | |
secondGridPane.add(lowerTs[i], col++, row, 1,1) | |
secondGridPane.add(classList[i], col++, row, 1, 1) | |
secondGridPane.add(upperTs[i], col++, row, 1,1) | |
secondGridPane.add(colorPickers[i], col++, row++, 1,1) | |
} | |
Button runButton = new Button() | |
runButton.setText("Run Classifier") | |
secondGridPane.add(runButton, 0, row++, 1, 1) | |
//All stuff for actually classifying cells | |
runButton.setOnAction { | |
//set up for classifier | |
def cells = getDetectionObjects() | |
cells.each {it.setPathClass(getPathClass('Negative'))} | |
//start classifier with all cells negative | |
for (def i=0; i<CHANNELS; i++){ | |
def lower = Float.parseFloat(lowerTs[i].getText()) | |
def upper = Float.parseFloat(upperTs[i].getText()) | |
//create lists for each measurement, classify cells based off of those measurements | |
positive[i] = cells.findAll {measurement(it, boxes[i].getValue()) >= lower && measurement(it, boxes[i].getValue()) <= upper} | |
positive[i].each {it.setPathClass(getPathClass(classList[i].getText()+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)} | |
c = colorPickers[i].getValue() | |
currentPathClass = getPathClass(classList[i].getText()+' positive') | |
//setColor is used here because it applies to classes; setColorRGB applies to objects, not classes | |
currentPathClass.setColor(ColorToolsFX.getRGBA(c)) | |
} | |
//Call the classifier on each list of positive single class cells, except for the last one! | |
for (def i=0; i<(CHANNELS-1); i++){ | |
println("ROUND "+i) | |
int remaining = 0 | |
for (def j = i+1; j<CHANNELS; j++){ | |
remaining +=1 | |
} | |
//println("SENDING CELLS TO CLASSIFIER "+positive[i].size()) | |
depth = 2 | |
classifier(classList[i].getText(), positive[i], remaining, i) | |
} | |
println("classifier done") | |
fireHierarchyUpdate() | |
} | |
//end Run Button | |
////////////////////////// | |
row+=10 //spacer | |
Button saveButton = new Button() | |
saveButton.setText("Save Classifier") | |
secondGridPane.add(saveButton, 1, row, 1, 1) | |
def TextField saveFile = new TextField("MyClassifier"); | |
saveFile.setMaxWidth( textFieldWidth); | |
saveFile.setAlignment(Pos.CENTER_RIGHT) | |
secondGridPane.add( saveFile, 2, row++, 1, 1) | |
//Save the current classifier settings to a file in the project's classifiers folder | |
saveButton.setOnAction { | |
def export = [] | |
for (def l=0; l<CHANNELS;l++){ | |
export << [boxes[l].getValue(), lowerTs[l].getText(), classList[l].getText(), upperTs[l].getText(), colorPickers[l].getValue().toString()] | |
} | |
mkdirs(buildFilePath(PROJECT_BASE_DIR, "classifiers")) | |
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",saveFile.getText()) | |
new File(path).withObjectOutputStream { | |
it.writeObject(export) | |
} | |
} | |
////////////////////// | |
//End of classifier window | |
Platform.runLater { | |
def stage2 = new Stage() | |
stage2.initOwner(QuPathGUI.getInstance().getStage()) | |
stage2.setScene(new Scene( secondGridPane)) | |
stage2.setTitle("Build Classifier ") | |
stage2.setWidth(870); | |
stage2.setHeight(900); | |
//stage.setResizable(false); | |
stage2.show() | |
} | |
} | |
//Platform.runLater is needed so the dialog is created and shown on the JavaFX Application Thread. | |
Platform.runLater { | |
def stage = new Stage() | |
stage.initOwner(QuPathGUI.getInstance().getStage()) | |
stage.setScene(new Scene( gridPane)) | |
stage.setTitle("Simple Classifier for Multiple Classes ") | |
stage.setWidth(550); | |
stage.setHeight(300); | |
//stage.setResizable(false); | |
stage.show() | |
} | |
//Recursive function to keep track of what needs to be classified next. | |
//listAName is the current class name (for example Class 1 during the first pass), which is extended at each intersection | |
//so that cells from this pass would be called Class 1,Class 2 positive. | |
//listA is the current list of cells being checked for intersection with the first member of... | |
//remainingListSize is the number of lists in "positive[]" that the current list needs to be checked against | |
//position keeps track of the starting position of listAName class. So on the first runthrough everything will start with C1 | |
//The next runthrough will start with position 2 since the base class will be C2 | |
void classifier (listAName, listA, remainingListSize, position = 0){ | |
//println("listofLists " +remainingListSize) | |
//println("base list size"+listA.size()) | |
for (def y=0; y <remainingListSize; y++){ | |
//println("listofLists in loop" +remainingListSize) | |
//println("y "+y) | |
//println("depth"+depth) | |
k = (position+y+1).intValue() | |
//println("k "+k) | |
// get the measurements needed to determine if a cell is a member of the next class (from listOfLists) | |
def lower = Float.parseFloat(lowerTs[k].getText()) | |
def upper = Float.parseFloat(upperTs[k].getText()) | |
//intersect the listA with the first of the listOfLists | |
//on the first run, this would take all of Class 1, and compare it with measurements that determine Class 2, resulting in a subset of | |
//Class 1 that meet both criteria | |
def passList = listA.findAll {measurement(it, boxes[k].getValue()) >= lower && measurement(it, boxes[k].getValue()) <= upper} | |
newName = classList[k].getText() | |
//Create a new name based off of the current name and the newly compared class | |
// on the first runthrough this would give "Class 1,Class 2 positive" | |
def mergeName = listAName+","+newName | |
//println("depth "+depth) | |
//println(mergeName+" with number of remaining lists "+remainingListSize) | |
passList.each{ | |
//Check whether the class being applied is "shorter" than the cell's current class. | |
//This prevents something like "C2,C3" from overwriting "C1,C2,C3,C4" from the first call. | |
if (it.getMeasurementList().getMeasurementValue("ClassDepth") < depth) { | |
it.setPathClass(getPathClass(mergeName+' positive')); | |
it.getMeasurementList().putMeasurement("ClassDepth", depth) | |
} | |
} | |
if (k == (positive.size()-1)){ | |
//If we are comparing the current list to the last positive class list, we are done | |
//Go up one level of classifier depth and return | |
depth -=1 | |
return; | |
} else{ | |
//Otherwise, move one place further along the "positive" list of base classes, and increase depth | |
//This happens when going from C1,C2 to C1,C2,C3 etc. | |
def passAlong = remainingListSize-1 | |
//println("passAlong "+passAlong.size()) | |
//println("name for next " +mergeName) | |
depth +=1 | |
classifier(mergeName, passList, passAlong, k) | |
} | |
//println("loopy depth"+depth) | |
} | |
} |
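Both script versions build their base classes the same way: the default upper threshold is the largest value representable at the image's bit depth (the `maxPixel` calculation), and a cell joins `positive[i]` when its chosen measurement falls inclusively between the lower and upper thresholds. A minimal plain-Python sketch of that pass, with hypothetical helper names and a dict standing in for QuPath's measurement lists:

```python
def max_pixel(bits_per_pixel):
    # Default upper threshold: the largest value representable at this
    # bit depth, e.g. 255 for 8-bit or 65535 for 16-bit images.
    return 2 ** bits_per_pixel - 1

def base_positives(cells, thresholds):
    """cells: {cell id: {measurement name: value}};
    thresholds: list of (measurement name, lower, upper) per base class.
    Returns positives[i] = set of ids inside the inclusive range,
    mirroring the findAll calls in the Run Classifier handler."""
    out = []
    for meas, lower, upper in thresholds:
        out.append({
            cid for cid, m in cells.items()
            # NaN (missing measurement) fails both comparisons, so such
            # cells stay out of the positive set.
            if lower <= m.get(meas, float("nan")) <= upper
        })
    return out
```

Cells that land in none of the sets keep the initial Negative class the script assigns before thresholding.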
//V3 Corrected classification overwrite error on classifiers with more than 3 parts | |
import javafx.application.Platform | |
import javafx.beans.property.SimpleLongProperty | |
import javafx.geometry.Insets | |
import javafx.scene.Scene | |
import javafx.geometry.Pos | |
import javafx.scene.control.Button | |
import javafx.scene.control.Label | |
import javafx.scene.control.TableView | |
import javafx.scene.control.TextField | |
import javafx.scene.control.CheckBox | |
import javafx.scene.control.ComboBox | |
import javafx.scene.control.TableColumn | |
import javafx.scene.control.ColorPicker | |
import javafx.scene.layout.BorderPane | |
import javafx.scene.layout.GridPane | |
import javafx.scene.control.Tooltip | |
import javafx.stage.Stage | |
import qupath.lib.gui.QuPathGUI | |
import qupath.lib.gui.helpers.ColorToolsFX; | |
import javafx.scene.paint.Color; | |
//Settings to control the dialog boxes for the GUI | |
int col = 0 | |
int row = 0 | |
int textFieldWidth = 120 | |
int labelWidth = 150 | |
def gridPane = new GridPane() | |
gridPane.setPadding(new Insets(10, 10, 10, 10)); | |
gridPane.setVgap(2); | |
gridPane.setHgap(10); | |
def server = getCurrentImageData().getServer() | |
//Upper thresholds default to the maximum value representable at the image's bit depth, since that is the most common upper limit for a given image. | |
maxPixel = Math.pow((double) 2,(double)server.getBitsPerPixel())-1 | |
positive = [] | |
//print(maxPixel) | |
def titleLabel = new Label("Intended for use where one marker determines a base class.\nFor example, you could use Channel 1 Cytoplasmic Mean and Channel 2 Nuclear Mean\nto generate two base classes and a Double positive class where each condition is true.\n\n") | |
gridPane.add(titleLabel,col, row++, 3, 1) | |
def requestLabel = new Label("How many base classes/single measurements are you interested in?\nThe above example would have two.\n") | |
gridPane.add(requestLabel,col, row++, 3, 1) | |
def TextField classText = new TextField("2"); | |
classText.setMaxWidth( textFieldWidth); | |
classText.setAlignment(Pos.CENTER_RIGHT) | |
gridPane.add(classText, col++, row, 1, 1) | |
//ArrayList<Label> channelLabels | |
Button startButton = new Button() | |
startButton.setText("Start Classifying") | |
gridPane.add(startButton, col, row++, 1, 1) | |
startButton.setTooltip(new Tooltip("If you need to change the number of classes, re-run the script")); | |
col = 0 | |
row+=10 //spacer | |
def loadLabel = new Label("Load a classifier:") | |
gridPane.add(loadLabel,col++, row, 2, 1) | |
def TextField classFile = new TextField("MyClassifier"); | |
classFile.setMaxWidth( textFieldWidth); | |
classFile.setAlignment(Pos.CENTER_RIGHT) | |
gridPane.add( classFile, col++, row, 1, 1) | |
Button loadButton = new Button() | |
loadButton.setText("Load Classifier") | |
gridPane.add(loadButton, col++, row++, 1, 1) | |
//The load path reuses the classifier-builder code below, simply substituting the saved values as inputs | |
loadButton.setOnAction{ | |
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",classFile.getText()) | |
new File(path).withObjectInputStream { | |
cObj = it.readObject() | |
} | |
//Create an arraylist with the same number of entries as classes | |
CHANNELS = cObj.size() | |
col = 0 | |
row = 0 | |
def secondGridPane = new GridPane() | |
secondGridPane.setPadding(new Insets(10, 10, 10, 10)); | |
secondGridPane.setVgap(2); | |
secondGridPane.setHgap(10); | |
def assist = new Label("Short Class Names are recommended as dual positives and beyond use the full names of all positive classes.\n ") | |
secondGridPane.add(assist, col, row++, 5, 1) | |
def mChoice = new Label("Measurement ") | |
mChoice.setMaxWidth(400) | |
mChoice.setAlignment(Pos.CENTER_RIGHT) | |
def mLowThresh = new Label("Lower Threshold <= ") | |
def mHighThresh = new Label("<= Upper Threshold ") | |
def mClassName = new Label("Class Name ") | |
secondGridPane.add( mChoice, col++, row, 1,1) | |
secondGridPane.add( mLowThresh, col++, row, 1,1) | |
secondGridPane.add( mClassName, col++, row, 1,1) | |
secondGridPane.add( mHighThresh, col, row++, 1,1) | |
//create data structures to use for building the classifier | |
boxes = new ComboBox [CHANNELS] | |
lowerTs = new TextField [CHANNELS] | |
classList = new TextField [CHANNELS] | |
upperTs = new TextField [CHANNELS] | |
colorPickers = new ColorPicker [CHANNELS] | |
//create the dialog where the user will select the measurements of interest and values | |
for (def i=0; i<CHANNELS;i++) { | |
col =0 | |
//Add to dialog box, new row for each | |
boxes[i] = new ComboBox() | |
qupath.lib.classifiers.PathClassificationLabellingHelper.getAvailableFeatures(getDetectionObjects()).each {boxes[i].getItems().add(it) } | |
boxes[i].setValue(cObj[i][0]) | |
classList[i] = new TextField(cObj[i][2]) | |
lowerTs[i] = new TextField(cObj[i][1]) | |
upperTs[i] = new TextField(cObj[i][3]) | |
classList[i].setMaxWidth( textFieldWidth); | |
classList[i].setAlignment(Pos.CENTER_RIGHT) | |
lowerTs[i].setMaxWidth( textFieldWidth); | |
lowerTs[i].setAlignment(Pos.CENTER_RIGHT) | |
upperTs[i].setMaxWidth( textFieldWidth); | |
upperTs[i].setAlignment(Pos.CENTER_RIGHT) | |
colorPickers[i] = new ColorPicker(Color.web(cObj[i][4])) | |
secondGridPane.add(boxes[i], col++, row, 1,1) | |
secondGridPane.add(lowerTs[i], col++, row, 1,1) | |
secondGridPane.add(classList[i], col++, row, 1, 1) | |
secondGridPane.add(upperTs[i], col++, row, 1,1) | |
secondGridPane.add(colorPickers[i], col++, row++, 1,1) | |
} | |
Button runButton = new Button() | |
runButton.setText("Run Classifier") | |
secondGridPane.add(runButton, 0, row++, 1, 1) | |
//All stuff for actually classifying cells | |
runButton.setOnAction { | |
//set up for classifier | |
def cells = getCellObjects() | |
cells.each {it.setPathClass(getPathClass('Negative'))} | |
startTime = System.currentTimeMillis() | |
//start classifier with all cells negative | |
for (def i=0; i<CHANNELS; i++){ | |
def lower = Float.parseFloat(lowerTs[i].getText()) | |
def upper = Float.parseFloat(upperTs[i].getText()) | |
//create lists for each measurement, classify cells based off of those measurements | |
positive[i] = cells.findAll {measurement(it, boxes[i].getValue()) >= lower && measurement(it, boxes[i].getValue()) <= upper} | |
positive[i].each {it.setPathClass(getPathClass(classList[i].getText()+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)} | |
c = colorPickers[i].getValue() | |
currentPathClass = getPathClass(classList[i].getText()+' positive') | |
//setColor is used here because it applies to classes; setColorRGB applies to objects, not classes | |
currentPathClass.setColor(ColorToolsFX.getRGBA(c)) | |
} | |
//Call the classifier on each list of positive single class cells, except for the last one! | |
for (def i=0; i<(CHANNELS-1); i++){ | |
println("ROUND "+i) | |
int remaining = 0 | |
for (def j = i+1; j<CHANNELS; j++){ | |
remaining +=1 | |
} | |
//println("SENDING CELLS TO CLASSIFIER "+positive[i].size()) | |
depth = 2 | |
classifier(classList[i].getText(), positive[i], remaining, i) | |
} | |
println("classifier done") | |
fireHierarchyUpdate() | |
} | |
//end Run Button | |
row+=10 //spacer | |
Button saveButton = new Button() | |
saveButton.setText("Save Classifier") | |
secondGridPane.add(saveButton, 1, row, 1, 1) | |
def TextField saveFile = new TextField("MyClassifier"); | |
saveFile.setMaxWidth( textFieldWidth); | |
saveFile.setAlignment(Pos.CENTER_RIGHT) | |
secondGridPane.add( saveFile, 2, row++, 1, 1) | |
//Save the current classifier settings to a file in the project's classifiers folder | |
saveButton.setOnAction { | |
def export = [] | |
for (def l=0; l<CHANNELS;l++){ | |
export << [boxes[l].getValue(), lowerTs[l].getText(), classList[l].getText(), upperTs[l].getText(), colorPickers[l].getValue().toString()] | |
} | |
mkdirs(buildFilePath(PROJECT_BASE_DIR, "classifiers")) | |
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",saveFile.getText()) | |
new File(path).withObjectOutputStream { | |
it.writeObject(export) | |
} | |
} | |
//End of classifier window | |
Platform.runLater { | |
def stage3 = new Stage() | |
stage3.initOwner(QuPathGUI.getInstance().getStage()) | |
stage3.setScene(new Scene( secondGridPane)) | |
stage3.setTitle("Loaded Classifier "+classFile.getText()) | |
stage3.setWidth(870); | |
stage3.setHeight(900); | |
//stage.setResizable(false); | |
stage3.show() | |
} | |
} | |
//end of the loaded classifier | |
startButton.setOnAction { | |
col = 0 | |
row = 0 | |
//Create an arraylist with the same number of entries as classes | |
CHANNELS = Integer.parseInt(classText.getText()) | |
//channelLabels = new ArrayList( CHANNELS) | |
def secondGridPane = new GridPane() | |
secondGridPane.setPadding(new Insets(10, 10, 10, 10)); | |
secondGridPane.setVgap(2); | |
secondGridPane.setHgap(10); | |
def assist = new Label("Short Class Names are recommended as dual positives and beyond use the full names of all positive classes.\n ") | |
secondGridPane.add(assist, col, row++, 5, 1) | |
def mChoice = new Label("Measurement ") | |
mChoice.setMaxWidth(400) | |
mChoice.setAlignment(Pos.CENTER_RIGHT) | |
def mLowThresh = new Label("Lower Threshold <= ") | |
def mHighThresh = new Label("<= Upper Threshold ") | |
def mClassName = new Label("Class Name ") | |
secondGridPane.add( mChoice, col++, row, 1,1) | |
secondGridPane.add( mLowThresh, col++, row, 1,1) | |
secondGridPane.add( mClassName, col++, row, 1,1) | |
secondGridPane.add( mHighThresh, col, row++, 1,1) | |
//create data structures to use for building the classifier | |
boxes = new ComboBox [CHANNELS] | |
lowerTs = new TextField [CHANNELS] | |
classList = new TextField [CHANNELS] | |
upperTs = new TextField [CHANNELS] | |
colorPickers = new ColorPicker [CHANNELS] | |
//create the dialog where the user will select the measurements of interest and values | |
for (def i=0; i<CHANNELS;i++) { | |
col =0 | |
//Add to dialog box, new row for each | |
boxes[i] = new ComboBox() | |
qupath.lib.classifiers.PathClassificationLabellingHelper.getAvailableFeatures(getDetectionObjects()).each {boxes[i].getItems().add(it) } | |
classList[i] = new TextField("C" + (i+1)) | |
lowerTs[i] = new TextField("0") | |
upperTs[i] = new TextField(maxPixel.toString()) | |
classList[i].setMaxWidth( textFieldWidth); | |
classList[i].setAlignment(Pos.CENTER_RIGHT) | |
lowerTs[i].setMaxWidth( textFieldWidth); | |
lowerTs[i].setAlignment(Pos.CENTER_RIGHT) | |
upperTs[i].setMaxWidth( textFieldWidth); | |
upperTs[i].setAlignment(Pos.CENTER_RIGHT) | |
colorPickers[i] = new ColorPicker() | |
secondGridPane.add(boxes[i], col++, row, 1,1) | |
secondGridPane.add(lowerTs[i], col++, row, 1,1) | |
secondGridPane.add(classList[i], col++, row, 1, 1) | |
secondGridPane.add(upperTs[i], col++, row, 1,1) | |
secondGridPane.add(colorPickers[i], col++, row++, 1,1) | |
} | |
Button runButton = new Button() | |
runButton.setText("Run Classifier") | |
secondGridPane.add(runButton, 0, row++, 1, 1) | |
//All stuff for actually classifying cells | |
runButton.setOnAction { | |
//set up for classifier | |
def cells = getCellObjects() | |
cells.each {it.setPathClass(getPathClass('Negative'))} | |
//start classifier with all cells negative | |
for (def i=0; i<CHANNELS; i++){ | |
def lower = Float.parseFloat(lowerTs[i].getText()) | |
def upper = Float.parseFloat(upperTs[i].getText()) | |
//create lists for each measurement, classify cells based off of those measurements | |
positive[i] = cells.findAll {measurement(it, boxes[i].getValue()) >= lower && measurement(it, boxes[i].getValue()) <= upper} | |
positive[i].each {it.setPathClass(getPathClass(classList[i].getText()+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)} | |
c = colorPickers[i].getValue() | |
currentPathClass = getPathClass(classList[i].getText()+' positive') | |
//setColor is used here because it applies to classes; setColorRGB applies to objects, not classes | |
currentPathClass.setColor(ColorToolsFX.getRGBA(c)) | |
} | |
//Call the classifier on each list of positive single class cells, except for the last one! | |
for (def i=0; i<(CHANNELS-1); i++){ | |
println("ROUND "+i) | |
int remaining = 0 | |
for (def j = i+1; j<CHANNELS; j++){ | |
remaining +=1 | |
} | |
//println("SENDING CELLS TO CLASSIFIER "+positive[i].size()) | |
depth = 2 | |
classifier(classList[i].getText(), positive[i], remaining, i) | |
} | |
println("classifier done") | |
fireHierarchyUpdate() | |
} | |
//end Run Button | |
////////////////////////// | |
row+=10 //spacer | |
Button saveButton = new Button() | |
saveButton.setText("Save Classifier") | |
secondGridPane.add(saveButton, 1, row, 1, 1) | |
def TextField saveFile = new TextField("MyClassifier"); | |
saveFile.setMaxWidth( textFieldWidth); | |
saveFile.setAlignment(Pos.CENTER_RIGHT) | |
secondGridPane.add( saveFile, 2, row++, 1, 1) | |
//All stuff for saving the classifier settings | |
saveButton.setOnAction { | |
def export = [] | |
for (def l=0; l<CHANNELS;l++){ | |
export << [boxes[l].getValue(), lowerTs[l].getText(), classList[l].getText(), upperTs[l].getText(), colorPickers[l].getValue().toString()] | |
} | |
mkdirs(buildFilePath(PROJECT_BASE_DIR, "classifiers")) | |
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",saveFile.getText()) | |
new File(path).withObjectOutputStream { | |
it.writeObject(export) | |
} | |
} | |
////////////////////// | |
//End of classifier window | |
Platform.runLater { | |
def stage2 = new Stage() | |
stage2.initOwner(QuPathGUI.getInstance().getStage()) | |
stage2.setScene(new Scene( secondGridPane)) | |
stage2.setTitle("Build Classifier ") | |
stage2.setWidth(870); | |
stage2.setHeight(900); | |
//stage.setResizable(false); | |
stage2.show() | |
} | |
} | |
//Platform.runLater ensures the dialog is created and shown on the JavaFX Application Thread. | |
Platform.runLater { | |
def stage = new Stage() | |
stage.initOwner(QuPathGUI.getInstance().getStage()) | |
stage.setScene(new Scene( gridPane)) | |
stage.setTitle("Simple Classifier for Multiple Classes ") | |
stage.setWidth(550); | |
stage.setHeight(300); | |
//stage.setResizable(false); | |
stage.show() | |
} | |
//Recursive function to keep track of what needs to be classified next. | |
//listAName is the current classifier name (for example Class 1 during the first pass) which gets modified with the intersect | |
//and would result in cells from this pass being called Class 1,Class2 positive. | |
//listA is the current list of cells being checked for intersection with the first member of... | |
//remainingListSize is the number of lists in "positive[]" that the current list needs to be checked against | |
//position keeps track of the starting position of listAName class. So on the first runthrough everything will start with C1 | |
//The next runthrough will start with position 2 since the base class will be C2 | |
void classifier (listAName, listA, remainingListSize, position = 0){ | |
//println("listofLists " +remainingListSize) | |
//println("base list size"+listA.size()) | |
for (def y=0; y <remainingListSize; y++){ | |
//println("listofLists in loop" +remainingListSize) | |
//println("y "+y) | |
//println("depth"+depth) | |
k = (position+y+1).intValue() | |
//println("k "+k) | |
// get the measurements needed to determine if a cell is a member of the next class (from listOfLists) | |
def lower = Float.parseFloat(lowerTs[k].getText()) | |
def upper = Float.parseFloat(upperTs[k].getText()) | |
//intersect the listA with the first of the listOfLists | |
//on the first run, this would take all of Class 1, and compare it with measurements that determine Class 2, resulting in a subset of | |
//Class 1 that meet both criteria | |
def passList = listA.findAll {measurement(it, boxes[k].getValue()) >= lower && measurement(it, boxes[k].getValue()) <= upper} | |
newName = classList[k].getText() | |
//Create a new name based off of the current name and the newly compared class | |
// on the first runthrough this would give "Class 1,Class 2 positive" | |
def mergeName = listAName+","+newName | |
//println("depth "+depth) | |
//println(mergeName+" with number of remaining lists "+remainingListSize) | |
passList.each{ | |
//Check if the class being applied is "shorter" than the current class. | |
//This prevents something like "C2,C3" from overwriting "C1,C2,C3,C4" from the first call. | |
if (it.getMeasurementList().getMeasurementValue("ClassDepth") < depth) { | |
it.setPathClass(getPathClass(mergeName+' positive')); | |
it.getMeasurementList().putMeasurement("ClassDepth", depth) | |
} | |
} | |
if (k == (positive.size()-1)){ | |
//If we are comparing the current list to the last positive class list, we are done | |
//Go up one level of classifier depth and return | |
depth -=1 | |
return; | |
} else{ | |
//Otherwise, move one place further along the "positive" list of base classes, and increase depth | |
//This happens when going from C1,C2 to C1,C2,C3 etc. | |
def passAlong = remainingListSize-1 | |
//println("passAlong "+passAlong.size()) | |
//println("name for next " +mergeName) | |
depth +=1 | |
classifier(mergeName, passList, passAlong, k) | |
} | |
//println("loopy depth"+depth) | |
} | |
} |
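The recursive classifier above walks every combination of base classes, renaming cells as it intersects the per-channel positive lists. The net effect can be sketched standalone (plain Python rather than QuPath Groovy; marker names, thresholds, and measurements below are all hypothetical): a cell's merged class is just the comma-joined list of channels whose measurement falls inside that channel's [lower, upper] window.

```python
# Standalone sketch of the combination classification the recursive
# classifier produces: a cell positive for several channels ends up
# with a merged class name like "CD3,CD8 positive".
# Channel thresholds: name -> (lower, upper); all values hypothetical.
thresholds = {
    "CD3": (10.0, 255.0),
    "CD8": (20.0, 255.0),
    "PD1": (15.0, 255.0),
}

def classify(cell_measurements, thresholds):
    """Return the merged class name for one cell, or 'Negative'."""
    positives = [name for name, (lo, hi) in thresholds.items()
                 if lo <= cell_measurements.get(name, 0.0) <= hi]
    return ",".join(positives) + " positive" if positives else "Negative"

cells = [
    {"CD3": 50.0, "CD8": 30.0, "PD1": 1.0},   # double positive
    {"CD3": 50.0, "CD8": 5.0, "PD1": 1.0},    # single positive
    {"CD3": 2.0, "CD8": 2.0, "PD1": 2.0},     # negative
]
labels = [classify(c, thresholds) for c in cells]
print(labels)
```

The Groovy version reaches the same labels via recursion plus a "ClassDepth" measurement so a shorter combination never overwrites a longer one; the sketch sidesteps that by computing each cell's full combination in one pass.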
//V3 Corrected classification overwrite error on classifiers with more than 3 parts | |
import qupath.lib.gui.helpers.ColorToolsFX; | |
import javafx.scene.paint.Color; | |
//Hopefully you can simply replace the fileName with your classifier, and include this in a script. This version uses getDetectionObjects(), so it should also work for tiles. | |
fileName = "MyClassifier" | |
positive = [] | |
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",fileName) | |
new File(path).withObjectInputStream { | |
cObj = it.readObject() | |
} | |
//Create an arraylist with the same number of entries as classes | |
CHANNELS = cObj.size() | |
//println(cObj) | |
//set up for classifier | |
def cells = getDetectionObjects() | |
cells.each {it.setPathClass(getPathClass('Negative'))} | |
//start classifier with all cells negative | |
for (def i=0; i<CHANNELS; i++){ | |
def lower = Float.parseFloat(cObj[i][1]) | |
def upper = Float.parseFloat(cObj[i][3]) | |
//create lists for each measurement, classify cells based off of those measurements | |
positive[i] = cells.findAll {measurement(it, cObj[i][0]) >= lower && measurement(it, cObj[i][0]) <= upper} | |
positive[i].each {it.setPathClass(getPathClass(cObj[i][2]+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)} | |
c = Color.web(cObj[i][4]) | |
currentPathClass = getPathClass(cObj[i][2]+' positive') | |
//setColor is needed here rather than setColorRGB; setColorRGB applies to objects, while setColor applies to classes | |
currentPathClass.setColor(ColorToolsFX.getRGBA(c)) | |
} | |
for (def i=0; i<(CHANNELS-1); i++){ | |
//println(i) | |
int remaining = 0 | |
for (def j = i+1; j<CHANNELS; j++){ | |
remaining +=1 | |
} | |
depth = 2 | |
classifier(cObj[i][2], positive[i], remaining, i) | |
} | |
fireHierarchyUpdate() | |
def classifier (listAName, listA, remainingListSize, position){ | |
//current point in the list of lists, allows access to the measurements needed to figure out what from the current class is also part of the next class | |
for (def y=0; y <remainingListSize; y++){ | |
k = (position+y+1).intValue() | |
// get the measurements needed to determine if a cell is a member of the next class (from listOfLists) | |
def lower = Float.parseFloat(cObj[k][1]) | |
def upper = Float.parseFloat(cObj[k][3]) | |
//intersect the listA with the first of the listOfLists | |
//on the first run, this would take all of Class 1, and compare it with measurements that determine Class 2, resulting in a subset of | |
//Class 1 that meet both criteria | |
def passList = listA.findAll {measurement(it, cObj[k][0]) >= lower && measurement(it, cObj[k][0]) <= upper} | |
newName = cObj[k][2] | |
//Create a new name based off of the current name and the newly compared class | |
// on the first runthrough this would give "Class 1,Class 2 positive" | |
def mergeName = listAName+","+newName | |
passList.each{ | |
if (it.getMeasurementList().getMeasurementValue("ClassDepth") < depth) { | |
it.setPathClass(getPathClass(mergeName+' positive')); | |
it.getMeasurementList().putMeasurement("ClassDepth", depth) | |
} | |
} | |
if (k == (positive.size()-1)){ | |
//println(passList.size()+"number of "+mergeName+" cells passed") | |
for (def z=0; z<CHANNELS; z++){ | |
//println("before"+positive[z].size()) | |
positive[z] = positive[z].minus(passList) | |
//println(z+" after "+positive[z].size()) | |
} | |
depth -=1 | |
return; | |
} else{ | |
def passAlong = remainingListSize-1 | |
//println("passAlong "+passAlong.size()) | |
//println("name for next " +mergeName) | |
depth +=1 | |
classifier(mergeName, passList, passAlong, k) | |
} | |
} | |
} |
//V3 Corrected classification overwrite error on classifiers with more than 3 parts | |
import qupath.lib.gui.helpers.ColorToolsFX; | |
import javafx.scene.paint.Color; | |
//Hopefully you can simply replace the fileName with your classifier, and include this in a script. | |
fileName = "MyClassifier" | |
positive = [] | |
path = buildFilePath(PROJECT_BASE_DIR, "classifiers",fileName) | |
new File(path).withObjectInputStream { | |
cObj = it.readObject() | |
} | |
//Create an arraylist with the same number of entries as classes | |
CHANNELS = cObj.size() | |
//println(cObj) | |
//set up for classifier | |
def cells = getCellObjects() | |
cells.each {it.setPathClass(getPathClass('Negative'))} | |
//start classifier with all cells negative | |
for (def i=0; i<CHANNELS; i++){ | |
def lower = Float.parseFloat(cObj[i][1]) | |
def upper = Float.parseFloat(cObj[i][3]) | |
//create lists for each measurement, classify cells based off of those measurements | |
positive[i] = cells.findAll {measurement(it, cObj[i][0]) >= lower && measurement(it, cObj[i][0]) <= upper} | |
positive[i].each {it.setPathClass(getPathClass(cObj[i][2]+' positive')); it.getMeasurementList().putMeasurement("ClassDepth", 1)} | |
c = Color.web(cObj[i][4]) | |
currentPathClass = getPathClass(cObj[i][2]+' positive') | |
//setColor is needed here rather than setColorRGB; setColorRGB applies to objects, while setColor applies to classes | |
currentPathClass.setColor(ColorToolsFX.getRGBA(c)) | |
} | |
for (def i=0; i<(CHANNELS-1); i++){ | |
//println(i) | |
int remaining = 0 | |
for (def j = i+1; j<CHANNELS; j++){ | |
remaining +=1 | |
} | |
depth = 2 | |
classifier(cObj[i][2], positive[i], remaining, i) | |
} | |
fireHierarchyUpdate() | |
def classifier (listAName, listA, remainingListSize, position){ | |
//current point in the list of lists, allows access to the measurements needed to figure out what from the current class is also part of the next class | |
for (def y=0; y <remainingListSize; y++){ | |
k = (position+y+1).intValue() | |
// get the measurements needed to determine if a cell is a member of the next class (from listOfLists) | |
def lower = Float.parseFloat(cObj[k][1]) | |
def upper = Float.parseFloat(cObj[k][3]) | |
//intersect the listA with the first of the listOfLists | |
//on the first run, this would take all of Class 1, and compare it with measurements that determine Class 2, resulting in a subset of | |
//Class 1 that meet both criteria | |
def passList = listA.findAll {measurement(it, cObj[k][0]) >= lower && measurement(it, cObj[k][0]) <= upper} | |
newName = cObj[k][2] | |
//Create a new name based off of the current name and the newly compared class | |
// on the first runthrough this would give "Class 1,Class 2 positive" | |
def mergeName = listAName+","+newName | |
passList.each{ | |
if (it.getMeasurementList().getMeasurementValue("ClassDepth") < depth) { | |
it.setPathClass(getPathClass(mergeName+' positive')); | |
it.getMeasurementList().putMeasurement("ClassDepth", depth) | |
} | |
} | |
if (k == (positive.size()-1)){ | |
//println(passList.size()+"number of "+mergeName+" cells passed") | |
for (def z=0; z<CHANNELS; z++){ | |
//println("before"+positive[z].size()) | |
positive[z] = positive[z].minus(passList) | |
//println(z+" after "+positive[z].size()) | |
} | |
depth -=1 | |
return; | |
} else{ | |
def passAlong = remainingListSize-1 | |
//println("passAlong "+passAlong.size()) | |
//println("name for next " +mergeName) | |
depth +=1 | |
classifier(mergeName, passList, passAlong, k) | |
} | |
} | |
} |
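Both no-GUI versions read a classifier saved as a list of five-element rows, [measurement name, lower, class name, upper, color string], written with Groovy's ObjectOutputStream, so the file itself is only readable from Groovy/Java. Purely as an illustration of that row layout (hypothetical measurement name and color string), parsing and validating one row looks like:

```python
# The saved classifier is a list of 5-element rows:
#   [measurement name, lower, class name, upper, color string]
# This sketch validates that layout on plain data; it cannot read the
# real Java-serialized file.
def parse_row(row):
    measurement, lower, name, upper, color = row
    lower, upper = float(lower), float(upper)
    if lower > upper:
        raise ValueError(f"lower > upper for {name}")
    return {"measurement": measurement, "lower": lower,
            "class": name + " positive", "upper": upper, "color": color}

rows = [
    ["Cell: Channel 2 mean", "10.0", "CD3", "255.0", "0x00ff00ff"],
]
parsed = [parse_row(r) for r in rows]
print(parsed[0]["class"])
```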
// DBScan implementation for QuPath 0.2.0 - Michael Nelson, May 2020 | |
// Based heavily on code from https://bhanuchander210.github.io/Tutorial-Machine-Learning-With-Groovy/ | |
// With major suggestion from Sara Mcardle | |
// Instigated by Colt Egelston | |
// Version 2.0 Added cluster size as a measurement | |
//Probably worth re-reading this section after it doesn't work quite right the first time. | |
//////////////////////////////////////////////////////////////////////////////// | |
//micronsBetweenCentroids (which is converted into "eps") and minPts adjust the behavior of the clustering. | |
//eps and minPts are as described in the DBSCAN wiki | |
//Set baseClasses to true if you want to ignore complex classes and use subclasses from multiplexing classifications | |
///////////////////////////////////////////////////////////////////////////////// | |
import org.apache.commons.math3.ml.clustering.DBSCANClusterer | |
import org.apache.commons.math3.ml.clustering.DoublePoint | |
//distance to search around an object for another centroid. | |
double micronsBetweenCentroids = 30.0 | |
//Minimum number of objects needed to be considered a cluster | |
int minPts = 5 | |
boolean baseClasses = false | |
double eps = micronsBetweenCentroids/getCurrentServer().getPixelCalibration().getPixelWidthMicrons() | |
print eps | |
//Get the classes you want to analyze. Avoids Negative and no class by default. | |
Set classSet = [] | |
List classList = [] | |
if (!baseClasses){ | |
for (object in getCellObjects()) { | |
c = object.getPathClass() | |
if (c != getPathClass("Negative")){ | |
classSet << c | |
} | |
} | |
classList.addAll(classSet.findAll{ | |
//If you only want one class, use it == getPathClass("MyClassHere") instead | |
it != null | |
}) | |
print classList | |
}else{ | |
for (object in getCellObjects()) { | |
parts = PathClassTools.splitNames(object.getPathClass()) | |
parts.each{ | |
if (it != "Negative"){ | |
classSet << it | |
} | |
} | |
} | |
classList.addAll(classSet.findAll{ | |
//If you only want one sub-class, use it == getPathClass("MyClassHere") instead | |
it != null | |
}) | |
} | |
classList.each{ c-> | |
//Storage for stuff we do later. points will hold the XY coordinates as DoublePoint objects | |
List<DoublePoint> points = new ArrayList<DoublePoint>() | |
//The Map allows us to use the DoublePoint to match the list of coordinates output by DBScan to the QuPath object | |
Map<DoublePoint, Double> pointMap = [:] | |
//Get the objects of interest for this class or sub-class | |
if(baseClasses){ | |
batch = getDetectionObjects().findAll{it.getPathClass().toString().contains(c)} | |
text = c | |
}else{ | |
batch = getDetectionObjects().findAll{it.getPathClass() == c} | |
text = c.getName() | |
} | |
//print batch.size() | |
//Prep each object being analyzed for clustering. | |
batch.eachWithIndex{d,x-> | |
//create the unique identifier, if you want to look back at details | |
//d.getMeasurementList().putMeasurement("ID",(double)x) | |
//Reset previous cluster analyses for the given cell | |
d?.getMeasurementList().removeMeasurements("Cluster "+text) | |
//create the linker between the ID and the centroid | |
double [] point = [d.getROI().getCentroidX(), d.getROI().getCentroidY()] | |
DoublePoint dpoint = new DoublePoint(point) | |
//print dpoint | |
points[x] = dpoint | |
//Key point here: each index (cell in most cases) is tracked and matched to its XY coordinates | |
pointMap[dpoint]= (double)x | |
} | |
//print points if you want to see them all | |
def showClosure = {detail -> | |
//println "Cluster : " + detail.cluster + " Point : " + detail.point + " Label : "+ detail.labels | |
//print "labels "+(int)detail.labels | |
//print "cluster"+detail.cluster | |
//this uses the label (the index from the "batch") to access the correct cell, and apply a measurement with the correct cluster number | |
batch[detail.labels]?.getMeasurementList()?.putMeasurement("Cluster "+text,detail.cluster ) | |
batch[detail.labels]?.getMeasurementList()?.putMeasurement("Cluster Size "+text,detail.clusterSize ) | |
} | |
//Main run statements | |
DBSCANClusterer DBScan = new DBSCANClusterer(eps, minPts) | |
collectDetails(DBScan.cluster(points), pointMap).each(showClosure) | |
} | |
print "Done!" | |
//Things from the website linked at the top that I messed with very little. | |
//Used to extract information from the result of DBScan, might be useful if I play with other kinds of clustering in the future. | |
List<ClusterDetail> collectDetails(def clusters, pointMap) | |
{ | |
List<ClusterDetail> ret = [] | |
clusters.eachWithIndex{ c, ci -> | |
c.getPoints().each { pnt -> | |
DoublePoint pt = pnt as DoublePoint | |
ret.add new ClusterDetail (ci +1 as Integer, pt, pointMap[pnt], c.getPoints().size()) | |
} | |
} | |
ret | |
} | |
class ClusterDetail | |
{ | |
int cluster | |
DoublePoint point | |
double labels | |
int clusterSize | |
ClusterDetail(int no, DoublePoint pt, double labs, int size) | |
{ | |
cluster = no; point= pt; labels = labs; clusterSize = size | |
} | |
}
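For readers unfamiliar with DBSCAN's two knobs, here is a minimal pure-Python sketch (not the Apache Commons Math implementation the script uses): eps is the neighborhood radius, derived here from micronsBetweenCentroids and a hypothetical 0.5 µm pixel width, and minPts is the neighbor count (including the point itself in this sketch) needed to make a core point.

```python
# Minimal pure-Python DBSCAN, illustrating eps and minPts.
import math

def dbscan(points, eps, min_pts):
    """Return a cluster label per point (-1 = noise)."""
    labels = [None] * len(points)
    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # noise (may later join a cluster edge)
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: joins, doesn't expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            more = neighbors(j)
            if len(more) >= min_pts:
                queue.extend(more)   # core point: expand the cluster
    return labels

# eps conversion, as in the script: microns / (microns per pixel)
microns_between_centroids, pixel_width_microns = 30.0, 0.5
eps = microns_between_centroids / pixel_width_microns   # 60 pixels
tight = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]
lone = [(500, 500)]
labels = dbscan(tight + lone, eps, min_pts=5)
print(labels)
```

With these hypothetical coordinates, the five tight points form one cluster and the distant point is labeled noise.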
// Script to find high density areas of classified cells in QuPath v0.2.0-m8. Version 4. | |
// Expected input: Classified cells, normally within some kind of annotation that does not have the same class as the cells. | |
// Expected output: No change to initial cells or classifications, but the addition of classified annotations around hotspots | |
// Downstream: Add further measurements to annotations based on the density and percentages of classified cells? | |
// Script by Mike Nelson, 1/15/2020. | |
double smooth = 7.0 | |
import ij.plugin.filter.EDM | |
import ij.plugin.filter.RankFilters | |
import ij.process.Blitter | |
import ij.process.ByteProcessor | |
import ij.process.FloatProcessor | |
import qupath.imagej.processing.SimpleThresholding | |
import qupath.lib.objects.classes.PathClass | |
import qupath.lib.objects.classes.PathClassFactory | |
import qupath.lib.plugins.parameters.ParameterList | |
import qupath.imagej.processing.RoiLabeling; | |
import ij.gui.Wand; | |
import java.awt.Color | |
import ij.measure.Calibration; | |
import ij.IJ | |
import qupath.imagej.gui.IJExtension | |
import qupath.lib.regions.RegionRequest | |
import ij.ImagePlus; | |
import qupath.lib.regions.ImagePlane | |
import qupath.lib.objects.PathObjects | |
import ij.process.ImageProcessor; | |
import qupath.lib.objects.PathCellObject | |
import qupath.lib.roi.ShapeSimplifier | |
import java.awt.image.BufferedImage | |
import qupath.imagej.tools.IJTools | |
//Default values for dialog box, or values for running the script across a project. | |
int pixelDensity = 3 | |
double radiusMicrons = 20.0 | |
int minCells = 30 | |
boolean smoothing = true | |
boolean showHeatMap = false | |
//Collect some information from the user to use in the hotspot detection | |
/////////////////////////////////////////////////////////////////// | |
def params = new ParameterList() | |
.addIntParameter("minCells", "Minimum cell count", minCells, "cells", "Minimum number of cells in hotspot") | |
//.addDoubleParameter("pixelSizeMicrons", "Pixel size", pixelSizeMicrons, GeneralTools.micrometerSymbol(), "Choose downsampling-can break script on large images if not large enough") | |
.addIntParameter("density", "Density", pixelDensity, "Changes with the other variables, requires testing", "Integer values: lower pixel size requires lower density") | |
.addDoubleParameter("radiusMicrons", "Distance between cells", radiusMicrons, GeneralTools.micrometerSymbol(), "Usually roughly the distance between positive cell centroids") | |
.addBooleanParameter("smoothing", "Smoothing? ", smoothing, "Do you want smoothing") | |
.addBooleanParameter("heatmap", "Show Heatmap? ", showHeatMap, "Open a new window showing the heatmap. If ImageJ is already open, you can use that to look at pixel values") | |
if (!Dialogs.showParameterDialog("Parameters. WARNING, can be slow on large images with many clusters", params)) | |
return | |
radiusMicrons = params.getDoubleParameterValue("radiusMicrons") | |
minCells = params.getIntParameterValue("minCells") | |
pixelDensity = params.getIntParameterValue("density") | |
smoothing = params.getBooleanParameterValue("smoothing") | |
showHeatMap = params.getBooleanParameterValue("heatmap") | |
/////////////////////////////////////////////////////////////////// | |
//Comment out the entire section above and put the values you want in manually if you want to run the script "For Project" | |
int z = 0 | |
int t = 0 | |
def plane = ImagePlane.getPlane(z, t) | |
def imageData = getCurrentImageData() | |
def server = imageData.getServer() | |
def cells = getCellObjects() | |
pixelCount = server.getWidth()*server.getHeight() | |
downsample = Math.ceil(pixelCount/(double)500000000) | |
pixelSizeMicrons = downsample*server.getPixelCalibration().getAveragedPixelSizeMicrons() | |
int w = Math.ceil(server.getWidth() / downsample) | |
int h = Math.ceil(server.getHeight() / downsample) | |
int nPixels = w * h | |
double radius = radiusMicrons / pixelSizeMicrons | |
//println("downsample " + downsample) | |
//println("radius " +radius) | |
//Unsure about this part. Maybe it shouldn't start at 0,0 but should get the upper left pixel using the imageserver? | |
Calibration calIJ = new Calibration(); | |
calIJ.xOrigin = 0/downsample; | |
calIJ.yOrigin = 0/downsample; | |
//Find all classes | |
Set classSet = [] | |
for (object in getCellObjects()) { | |
classSet << object.getPathClass() | |
} | |
//convert classes into a list, which is ordered | |
/************************************************* | |
CLASS LIST MIGHT BE MODIFIABLE FOR MULTIPLEXING | |
*****************************************************/ | |
List classList = [] | |
classList.addAll(classSet.findAll{ | |
//If you only want one class, use it == getPathClass("MyClassHere") instead | |
it != getPathClass("Negative") | |
}) | |
removeObjects(getAnnotationObjects().findAll{(classList.contains(it.getPathClass()))},true) | |
print("Class list: "+ classList) | |
println("This part may be QUITE SLOW, with no apparent sign that it is working. Please wait for the 'Done' message.") | |
// Create centroid map | |
/***************************** | |
Create an array of floatprocessors per class | |
***************************/ | |
def fpList = [] | |
for (aClass in classList){ | |
fpArray = new FloatProcessor(w,h) | |
fpList << fpArray | |
} | |
def fpNegative = new FloatProcessor(w, h) | |
//////////////////////// Update valid mask | |
//Checking for areas to ignore (outside of annotations, near borders) | |
ByteProcessor bpValid | |
def annotations = getAnnotationObjects() | |
if (annotations) { | |
//making an image instead of a byteprocessor | |
def imgMask = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY) | |
//Get a graphics context to paint the annotation shapes into the mask | |
def g2d = imgMask.createGraphics() | |
//scale the image down by the downsample | |
g2d.scale(1.0/downsample, 1.0/downsample) | |
//Whole image starts black, 0s, fill annotation with white, 255? | |
g2d.setColor(Color.WHITE) | |
for (annotation in annotations) { | |
def shape = annotation.getROI().getShape() | |
g2d.fill(shape) | |
} | |
g2d.dispose() | |
//ok, I think at this point we have a large white block defining the annotation area | |
bpValid = new ByteProcessor(imgMask) | |
bpValid = SimpleThresholding.thresholdAbove(new EDM().makeFloatEDM(bpValid, 0, true), radius/4 as float) | |
//Ok, now we have a distance transform from the edge of the annotation object... | |
//Ahah! One that is thresholded so that we don't look for hotspots near the edge of something. Not sure if I will want this behavior or not | |
} | |
//clear out the original annotations to make it easier to cycle through all new annotations | |
removeObjects(annotations,true) | |
/////////////////////////////////////////// | |
//Create cell count map | |
for (cell in cells) { | |
def roi = PathObjectTools.getROI(cell, true) | |
if (roi.isEmpty()) | |
continue | |
def pathClass = cell.getPathClass() | |
//Ignore unclassified cells | |
if (pathClass == null ) | |
continue | |
int x = (roi.getCentroidX() / downsample) as int | |
int y = (roi.getCentroidY() / downsample) as int | |
//find the pixel of the current roi center, and validate the position against the mask that checks for being too close to the border or outside of annotations | |
int check = w*y+x | |
//This is where the fpList[] starts to get information from individual classes | |
//add 1 pixel value to the fpList equivalent to the class of the cell | |
//After this, each fpList object should be an array that shows COUNTS for cells within an area determined by downsampling | |
//Make sure we are writing to the correct position in fpList | |
for (i=0; i<classList.size(); i++){ | |
if (pathClass == classList[i]){ | |
if (bpValid.getf(check) != 0f){ | |
fpList[i].setf(x, y, fpList[i].getf(x, y) + 1f as float) | |
} | |
} | |
} | |
if (PathClassTools.isNegativeClass(pathClass) && bpValid.getf(check) != 0f) | |
fpNegative.setf(x, y, fpNegative.getf(x, y) + 1f as float) | |
} | |
//At this point we have cycled through all of the cells and built N heatmaps, though they are downsampled | |
//////////////////////////////////////////////////////////// | |
// In this section we create a mean filter to run across our downsampled density map, using the radius given by the user. | |
// This, along with the downsample, will fill in the spaces between cells | |
def rf = new RankFilters() | |
//Get an odd diameter so that there is a center | |
int dim = Math.ceil(radius * 2 + 5) | |
def fpTemp = new FloatProcessor(dim, dim) | |
//generate an empty square (0s) with R^2 as the center pixel value | |
fpTemp.setf(dim/2 as int, dim/2 as int, radius * radius as float) | |
//spread the radius squared across a circle using the euclidean distance, radius | |
rf.rank(fpTemp, radius, RankFilters.MEAN) | |
def pixels = fpTemp.getPixels() as float[] | |
//count the number of pixels within fpTemp that will actually be used by RankFilters.Mean when passing "radius" | |
double n = Arrays.stream(pixels).filter({f -> f > 0}).count() | |
// Compute sum of elements | |
//rankfilter is used to run a mean filter across the fpTemp area | |
/*######## NEED TO MAKE THIS ONLY USE INTERESTING CLASSES ###########*/ | |
fpList.each{ | |
rf.rank(it, radius, RankFilters.MEAN) | |
it.multiply(n) | |
} | |
//Here we take the mean-filtered density maps, apply the user's density threshold, | |
for (l=0; l<fpList.size(); l++){ | |
//create a mask based on the user threshold | |
hotspotMaskMap = SimpleThresholding.thresholdAbove(fpList[l], (float)pixelDensity) | |
//not 100% sure how this line worked, but it was necessary for the getFilledPolygonROIs to function | |
hotspotMaskMap.setThreshold(1, ImageProcessor.NO_THRESHOLD, ImageProcessor.NO_LUT_UPDATE) | |
//use the mask to generate ROIs that surround 4 connected points (not diagonals) | |
hotspotROIs = RoiLabeling.getFilledPolygonROIs(hotspotMaskMap, Wand.FOUR_CONNECTED); | |
//print(hotspotROIs.size()) | |
allqupathROIs = [] | |
qupathROIs = [] | |
//convert the ImageJ ROIs to QuPath ROIs | |
hotspotROIs.each{allqupathROIs << IJTools.convertToROI(it, calIJ, downsample, plane)} | |
//Use the QuPath ROIs to generate annotation objects (possibly smoothed), out of the heatmap ROIs | |
objects = [] | |
qupathROIs = allqupathROIs.findAll{it.getArea() > (radiusMicrons*radiusMicrons/(server.getPixelCalibration().getAveragedPixelSizeMicrons()*server.getPixelCalibration().getAveragedPixelSizeMicrons()))} | |
smoothedROIs = [] | |
qupathROIs.each{smoothedROIs << ShapeSimplifier.simplifyShape(it, downsample*2)} | |
//println("sizes "+ qupathROIs.size) | |
smoothedROIs.each{objects << PathObjects.createAnnotationObject(it, classList[l]);} | |
addObjects(objects) | |
} | |
resolveHierarchy() | |
//remove small hotspots | |
getAnnotationObjects().each{ | |
currentClass = it.getPathClass() | |
if (classList.contains(it.getPathClass())){ | |
count = [] | |
qupath.lib.objects.PathObjectTools.getDescendantObjects(it,count, PathCellObject) | |
count = count.findAll{cell -> cell.getPathClass() == currentClass} | |
if (count.size < minCells){ | |
//print count.size | |
removeObject(it,true) | |
} | |
} | |
} | |
Set hotSpotClassList = [] | |
for (object in getAnnotationObjects()) { | |
hotSpotClassList << object.getPathClass() | |
} | |
IJExtension.getImageJInstance() | |
if (showHeatMap){ | |
for (l=0; l<fpList.size(); l++){ | |
if (hotSpotClassList.contains(classList[l])){ | |
new ImagePlus(classList[l].toString()+" heatmap at "+ pixelSizeMicrons+ "um pixel size", fpList[l]).show() | |
} | |
} | |
} | |
if(smoothing){ | |
before = getAnnotationObjects() | |
selectAnnotations() | |
runPlugin('qupath.lib.plugins.objects.DilateAnnotationPlugin', '{"radiusMicrons": '+smooth+', "lineCap": "Round", "removeInterior": false, "constrainToParent": false}'); | |
removeObjects(before,true) | |
expanded = getAnnotationObjects() | |
selectAnnotations() | |
runPlugin('qupath.lib.plugins.objects.DilateAnnotationPlugin', '{"radiusMicrons": '+(-1*smooth)+', "lineCap": "Round", "removeInterior": false, "constrainToParent": false}'); | |
removeObjects(expanded,true) | |
resetSelection(); | |
} | |
//return the original annotations | |
addObjects(annotations) | |
resolveHierarchy() | |
getAnnotationObjects().each{it.setLocked(true)} | |
println("Done") |
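The hotspot script's core trick, centroid counts on a downsampled grid followed by a mean filter and a density threshold, can be sketched without ImageJ. Everything below (image size, downsample, cell positions, the 0.3 threshold) is hypothetical; the script itself does this per class with ImageJ's RankFilters mean.

```python
# Bin cell centroids into a downsampled grid, box-mean-filter the
# counts, then threshold the smoothed density to find hotspot pixels.
def density_grid(centroids, width, height, downsample):
    w, h = -(-width // downsample), -(-height // downsample)  # ceil division
    grid = [[0.0] * w for _ in range(h)]
    for x, y in centroids:
        grid[int(y // downsample)][int(x // downsample)] += 1.0
    return grid

def box_mean(grid, r):
    h, w = len(grid), len(grid[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [grid[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

# Hypothetical image: 100x100 px, downsample 10 -> 10x10 grid
cells = [(12, 12), (15, 18), (18, 15), (22, 22), (90, 90)]
smoothed = box_mean(density_grid(cells, 100, 100, 10), r=1)
hotspot = [(x, y) for y in range(10) for x in range(10)
           if smoothed[y][x] > 0.3]
print(hotspot)
```

The clump of four cells near the top-left survives the threshold while the single distant cell does not, which is exactly the minCells/density filtering the dialog exposes.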
//Description of use found here: https://petebankhead.github.io/qupath/scripts/2018/08/08/three-regions.html | |
/** | |
* Script to help with annotating tumor regions, separating the tumor margin from the center. | |
* | |
* Here, each of the margin regions is approximately 500 microns in width. | |
* | |
* @author Pete Bankhead | |
*/ | |
import qupath.lib.common.GeneralTools | |
import qupath.lib.objects.PathAnnotationObject | |
import qupath.lib.objects.PathObject | |
import qupath.lib.roi.PathROIToolsAwt | |
import java.awt.Rectangle | |
import java.awt.geom.Area | |
import static qupath.lib.scripting.QPEx.* | |
//----- | |
// Some things you might want to change | |
// How much to expand each region | |
double expandMarginMicrons = 500.0 | |
// Define the colors | |
def coloInnerMargin = getColorRGB(0, 0, 200) | |
def colorOuterMargin = getColorRGB(0, 200, 0) | |
def colorCentral = getColorRGB(0, 0, 0) | |
// Choose whether to lock the annotations or not (it's generally a good idea to avoid accidentally moving them) | |
def lockAnnotations = true | |
//----- | |
// Extract the main info we need | |
def imageData = getCurrentImageData() | |
def hierarchy = imageData.getHierarchy() | |
def server = imageData.getServer() | |
// We need the pixel size | |
if (!server.hasPixelSizeMicrons()) { | |
print 'We need the pixel size information here!' | |
return | |
} | |
if (!GeneralTools.almostTheSame(server.getPixelWidthMicrons(), server.getPixelHeightMicrons(), 0.0001)) { | |
print 'Warning! The pixel width & height are different; the average of both will be used' | |
} | |
// Get annotation & detections | |
def annotations = getAnnotationObjects() | |
def selected = getSelectedObject() | |
if (selected == null || !selected.isAnnotation()) { | |
print 'Please select an annotation object!' | |
return | |
} | |
// We need one selected annotation as a starting point; if we have other annotations, they will constrain the output | |
annotations.remove(selected) | |
// If we have at most one other annotation, it represents the tissue | |
Area areaTissue | |
PathObject tissueAnnotation | |
if (annotations.isEmpty()) { | |
areaTissue = new Area(new Rectangle(0, 0, server.getWidth(), server.getHeight())) | |
} else if (annotations.size() == 1) { | |
tissueAnnotation = annotations.get(0) | |
areaTissue = PathROIToolsAwt.getArea(tissueAnnotation.getROI()) | |
} else { | |
print 'Sorry, this script only supports one selected annotation for the tumor region, and at most one other annotation to constrain the expansion' | |
return | |
} | |
// Calculate how much to expand | |
double expandPixels = expandMarginMicrons / server.getAveragedPixelSizeMicrons() | |
def roiOriginal = selected.getROI() | |
def areaTumor = PathROIToolsAwt.getArea(roiOriginal) | |
// Get the outer margin area | |
def areaOuter = PathROIToolsAwt.shapeMorphology(areaTumor, expandPixels) | |
areaOuter.subtract(areaTumor) | |
areaOuter.intersect(areaTissue) | |
def roiOuter = PathROIToolsAwt.getShapeROI(areaOuter, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT()) | |
def annotationOuter = new PathAnnotationObject(roiOuter) | |
annotationOuter.setName("Outer margin") | |
annotationOuter.setColorRGB(colorOuterMargin) | |
// Get the central area | |
def areaCentral = PathROIToolsAwt.shapeMorphology(areaTumor, -expandPixels) | |
areaCentral.intersect(areaTissue) | |
def roiCentral = PathROIToolsAwt.getShapeROI(areaCentral, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT()) | |
def annotationCentral = new PathAnnotationObject(roiCentral) | |
annotationCentral.setName("Center") | |
annotationCentral.setColorRGB(colorCentral) | |
// Get the inner margin area | |
def areaInner = areaTumor
areaInner.subtract(areaCentral) | |
areaInner.intersect(areaTissue) | |
def roiInner = PathROIToolsAwt.getShapeROI(areaInner, roiOriginal.getC(), roiOriginal.getZ(), roiOriginal.getT()) | |
def annotationInner = new PathAnnotationObject(roiInner) | |
annotationInner.setName("Inner margin") | |
annotationInner.setColorRGB(coloInnerMargin) | |
// Add the annotations | |
hierarchy.getSelectionModel().clearSelection() | |
hierarchy.removeObject(selected, true) | |
def annotationsToAdd = [annotationOuter, annotationInner, annotationCentral]; | |
annotationsToAdd.each {it.setLocked(lockAnnotations)} | |
if (tissueAnnotation == null) { | |
hierarchy.addPathObjects(annotationsToAdd, false) | |
} else { | |
tissueAnnotation.addPathObjects(annotationsToAdd) | |
hierarchy.fireHierarchyChangedEvent(this, tissueAnnotation) | |
if (lockAnnotations) | |
tissueAnnotation.setLocked(true) | |
} |
//Not well commented, but the overall purpose of this script is to | |
//1. detect tissue in a brightfield image | |
//2. send the tissue to ImageJ | |
//2.5 If the image is too large, you may need to cut the tissue into tiles using something like
//runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizePx": 10000.0, "trimToROI": true, "makeAnnotations": true, "removeParentAnnotation": true}');
//in which case you will also need to run selectAnnotations() and mergeSelectedAnnotations() before doing the calculations at the end
//3. Using an ImageJ macro (with all comments and newline characters removed), threshold and find all empty spots
//The values for this will depend greatly on image quality, background, and brightness. You may also want to adjust the size thresholds for the Analyze Particles... command
//4. Once the detection objects are returned, sum up their areas, and divide by the total parent annotation area to find a percentage lipid area | |
// Since all of the detections exist as objects, you could also perform other analyses of them by adding circularity measurements etc. (Calculate Features/Add Shape Features) | |
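//The optional step 2.5 above, assembled as one commented-out sketch (the 10000 px tile size comes
//from the comment above and is an assumption - adjust it to what your machine can handle).
//Uncomment the tiling lines before the macro loop, and the merge lines after it:
//selectAnnotations()
//runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizePx": 10000.0, "trimToROI": true, "makeAnnotations": true, "removeParentAnnotation": true}');
//...run the ImageJ macro loop below on each tile annotation...
//selectAnnotations()
//mergeSelectedAnnotations()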
import qupath.imagej.plugins.ImageJMacroRunner | |
import qupath.lib.plugins.parameters.ParameterList | |
runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2', '{"threshold": 161, "requestedDownsample": 1.0, "minAreaPixels": 1.0E7, "maxHoleAreaPixels": 25500.0, "darkBackground": false, "smoothImage": true, "medianCleanup": true, "dilateBoundaries": false, "smoothCoordinates": true, "excludeOnBoundary": false, "singleAnnotation": true}'); | |
// Create a macro runner so we can check what the parameter list contains | |
def params = new ImageJMacroRunner(getQuPath()).getParameterList() | |
print ParameterList.getParameterListJSON(params, ' ') | |
// Change the value of a parameter, using the JSON to identify the key | |
params.getParameters().get('downsampleFactor').setValue(1.0 as double) | |
params.getParameters().get('getOverlay').setValue(true) | |
params.getParameters().get('clearObjects').setValue(true) | |
print ParameterList.getParameterListJSON(params, ' ') | |
// Get the macro text and other required variables | |
def macro = 'min=newArray(3);max=newArray(3);filter=newArray(3);a=getTitle();run("HSB Stack");run("Convert Stack to Images");selectWindow("Hue");rename("0");selectWindow("Saturation");rename("1");selectWindow("Brightness");rename("2");min[0]=9;max[0]=255;filter[0]="pass";min[1]=0;max[1]=45;filter[1]="pass";min[2]=136;max[2]=255;filter[2]="pass";for (i=0;i<3;i++){ selectWindow(""+i); setThreshold(min[i], max[i]); run("Convert to Mask"); if (filter[i]=="stop") run("Invert");}imageCalculator("AND create", "0","1");imageCalculator("AND create", "Result of 0","2");for (i=0;i<3;i++){ selectWindow(""+i); close();}selectWindow("Result of 0");close();selectWindow("Result of Result of 0");rename(a);run("Smooth");run("Smooth");run("Smooth");run("Smooth");run("Smooth");run("Smooth");run("Smooth");run("Make Binary");run("Despeckle");run("Despeckle");run("Analyze Particles...", "size=200-19500 show=[Overlay Masks] add");' | |
def imageData = getCurrentImageData() | |
//*********************************************************************************************** | |
//YOU MAY NEED TO CREATE TILE ANNOTATIONS HERE, DEPENDING ON THE SIZE OF YOUR IMAGE (+ REMOVE PARENT)
//I recommend the largest tiles you can possibly get away with, since tile borders will disrupt adipocyte detection
//******************************************************************************************* | |
def annotations = getAnnotationObjects() | |
// Loop through the annotations and run the macro | |
for (annotation in annotations) { | |
ImageJMacroRunner.runMacro(params, imageData, null, annotation, macro) | |
} | |
//selectAnnotations() and mergeSelectedAnnotations() here if needed.
//Remove and re-add the remaining annotation so that the hierarchy resolves the new detections as its children
selected = getAnnotationObjects()
removeObject(selected[0], true)
addObject(selected[0])
selected[0].setLocked(true) | |
selectObjects {p -> p.getLevel() == 1 && !p.isAnnotation()}
clearSelectedObjects(false); | |
import qupath.lib.objects.PathDetectionObject | |
hierarchy = getCurrentHierarchy() | |
for (annotation in getAnnotationObjects()){ | |
//Block 1 | |
def tiles = hierarchy.getDescendantObjects(annotation,null, PathDetectionObject) | |
double totalArea = 0 | |
for (def tile in tiles){ | |
totalArea += tile.getROI().getArea() | |
} | |
annotation.getMeasurementList().putMeasurement("Marked area px", totalArea) | |
def annotationArea = annotation.getROI().getArea() | |
annotation.getMeasurementList().putMeasurement("Marked area %", totalArea/annotationArea*100) | |
} | |
print 'Done!' |
// Set of scripts for running multiple cell detections in sequence. Can be used as many times as needed, though it is best
//to use a different detection file name for each iteration through the scripts.
//Updated 24/2/19 to ensure the directory exists
//STEP 1 | |
selectAnnotations() | |
//YOUR CELL DETECTION LINE HERE | |
mkdirs(buildFilePath(PROJECT_BASE_DIR, 'detection object files')) | |
def path = buildFilePath(PROJECT_BASE_DIR, 'detection object files', getCurrentImageData().getServer().getShortServerName()+' objects') | |
def detections = getCellObjects() //.collect {new qupath.lib.objects.PathCellObject(it.getROI(), it.getPathClass())} | |
new File(path).withObjectOutputStream { | |
it.writeObject(detections) | |
} | |
print 'Done!' | |
//STEP2 | |
//Run another cell detection | |
//STEP3 | |
def path = buildFilePath(PROJECT_BASE_DIR, 'detection object files', getCurrentImageData().getServer().getShortServerName()+' objects') | |
def detections = null | |
new File(path).withObjectInputStream { | |
detections = it.readObject() | |
} | |
addObjects(detections) | |
fireHierarchyUpdate() | |
print 'Added ' + detections.size() + ' detections'
//STEP4 | |
//Check for overlapping cells. This script simply eliminates smaller cells within larger cells, but this may not always be the | |
//criterion you want to use. Adjust as necessary | |
hierarchy = getCurrentHierarchy() | |
def parentCellsList = [] | |
getAnnotationObjects().each{ parentCellsList << it.getChildObjects().findAll{p->p.getChildObjects().size()>0} } | |
parentCellsList.each{ cells ->
    cells.each{ cell ->
        removeObjects(cell.getChildObjects(), false)
    }
}
fireHierarchyUpdate() | |
print "Done" |
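//STEP4 ALTERNATIVE - a sketch, not part of the original scripts: instead of deleting the
//contained (usually smaller) cells, keep them and delete the enclosing cell whenever its
//contained cells cover most of its area. The 0.75 threshold is an arbitrary assumption;
//adjust as necessary.
def cellsToRemove = []
getAnnotationObjects().each{ ann ->
    ann.getChildObjects().findAll{ it.getChildObjects().size() > 0 }.each{ enclosing ->
        double innerArea = enclosing.getChildObjects().sum{ it.getROI().getArea() }
        if (innerArea / enclosing.getROI().getArea() > 0.75)
            cellsToRemove << enclosing
    }
}
removeObjects(cellsToRemove, true) //true keeps the contained cells in the hierarchy
fireHierarchyUpdate()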
//Seems to work for Version 1.2, will not work with 1.3 and future builds | |
//Creating tiled areas and summing them for area based measurements. | |
//Setup functions, adjust to taste for negative detection | |
//runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2', '{"threshold": 204, "requestedPixelSizeMicrons": 3.0, "minAreaMicrons": 5000000.0, "maxHoleAreaMicrons": 500.0, "darkBackground": false, "smoothImage": false, "medianCleanup": false, "dilateBoundaries": false, "smoothCoordinates": false, "excludeOnBoundary": false, "singleAnnotation": true}'); | |
//This line should almost always be run first and then manually checked for accuracy of stain/artifacts. | |
//Choose your color deconvolutions here, I named them RED and BROWN for simplicity | |
//For this script, THE STAIN YOU ARE LOOKING FOR SHOULD COME FIRST - Stain 1 is Red for a PicroSirius Red detection | |
setColorDeconvolutionStains('{"Name" : "Red with background", "Stain 1" : "RED", "Values 1" : "0.25052 0.76455 0.59388 ", "Stain 2" : "BROWN", "Values 2" : "0.22949 0.5595 0.79642 ", "Background" : " 255 255 255 "}') | |
//Subdivide your annotation area into tiles | |
selectAnnotations(); | |
runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizeMicrons": 1000.0, "trimToROI": true, "makeAnnotations": true, "removeParentAnnotation": false}'); | |
//Perform the positive pixel detection on the tiles, but not the larger annotation | |
def tiles = getAnnotationObjects().findAll {it.getDisplayedName().toString().contains('Tile')}
getCurrentHierarchy().getSelectionModel().setSelectedObjects(tiles, null) | |
runPlugin('qupath.imagej.detect.tissue.PositivePixelCounterIJ', '{"downsampleFactor": 1, "gaussianSigmaMicrons": 0.3, "thresholdStain1": 0.2, "thresholdStain2": 0.1, "addSummaryMeasurements": true}'); | |
//Sum the negative pixel counts of each tile ('Negative' corresponds to Stain 1, the red stain of interest, per the deconvolution order above)
def total_Negative = 0
for (tile in tiles){ | |
total_Negative += tile.getMeasurementList().getMeasurementValue("Negative pixel count") | |
} | |
//summary[0] should be the original annotation, this assumes that there was only one original annotation | |
def summary = getAnnotationObjects().findAll {!it.getDisplayedName().toString().contains('Tile')}
summary[0].getMeasurementList().putMeasurement("Negative Pixel Sum", total_Negative) | |
def total_area = summary[0].getROI().getArea() | |
summary[0].getMeasurementList().putMeasurement("Percentage PSR Positive", total_Negative/total_area*100) | |
//Remove all of the tile annotations which would result in less readable output than a single tissue value | |
removeObjects(tiles,true) | |
//The following goes at the end of basically any script that ends with useful measurements that are part of the Annotation | |
/* | |
* QuPath v0.1.2 has some bugs that make exporting annotations a bit annoying, specifically it doesn't include the 'dot' | |
* needed in the filename if you run it in batch, and it might put the 'slashes' the wrong way on Windows. | |
* Manually fixing these afterwards is not very fun. | |
* | |
* Anyhow, until this is fixed you could try the following script with Run -> Run for Project. | |
* It should create a new subdirectory in the project, and write text files containing results there. | |
* | |
* @author Pete Bankhead | |
*/ | |
def name = getProjectEntry().getImageName() + '.txt' | |
def path = buildFilePath(PROJECT_BASE_DIR, 'annotation results') | |
mkdirs(path) | |
path = buildFilePath(path, name) | |
saveAnnotationMeasurements(path) | |
print 'Results exported to ' + path |
//QUPATH VERSION 1.3- Does NOT work with 1.2 | |
//Creating tiled areas and summing them for area based measurements applied to the original tissue annotation. Assumes 1 annotation, but could be expanded to handle multiple. | |
//runPlugin('qupath.imagej.detect.tissue.SimpleTissueDetection2', '{"threshold": 204, "requestedPixelSizeMicrons": 3.0, "minAreaMicrons": 5000000.0, "maxHoleAreaMicrons": 500.0, "darkBackground": false, "smoothImage": false, "medianCleanup": false, "dilateBoundaries": false, "smoothCoordinates": false, "excludeOnBoundary": false, "singleAnnotation": true}'); | |
//This line should almost always be run first, and then manually checked for accuracy of stain/artifacts. | |
server = getCurrentImageData().getServer() | |
//Choose your color deconvolutions here, I named them RED and BROWN for simplicity | |
//For this script, THE STAIN YOU ARE LOOKING FOR SHOULD COME FIRST - Stain 1 is Red for a PicroSirius Red detection | |
setColorDeconvolutionStains('{"Name" : "Red with background", "Stain 1" : "RED", "Values 1" : "0.25052 0.76455 0.59388 ", "Stain 2" : "BROWN", "Values 2" : "0.22949 0.5595 0.79642 ", "Background" : " 255 255 255 "}') | |
//Subdivide your annotation area into tiles | |
selectAnnotations(); | |
runPlugin('qupath.lib.algorithms.TilerPlugin', '{"tileSizeMicrons": 1000.0, "trimToROI": true, "makeAnnotations": true, "removeParentAnnotation": false}'); | |
//Perform the positive pixel detection on the tiles, but not the larger annotation | |
def tiles = getAnnotationObjects().findAll {it.getDisplayedName().toString().contains('Tile')}
getCurrentHierarchy().getSelectionModel().setSelectedObjects(tiles, null) | |
runPlugin('qupath.imagej.detect.tissue.PositivePixelCounterIJ', '{"downsampleFactor": 1, "gaussianSigmaMicrons": 0.3, "thresholdStain1": 0.2, "thresholdStain2": 0.1, "addSummaryMeasurements": true}'); | |
//Calculate the percentage of "negative" positive pixels, and apply that to the original tissue annotation | |
def total_Negative = 0 | |
def total_Positive = 0 | |
//Sum the areas of each tile | |
for (tile in tiles){ | |
total_Negative += tile.getMeasurementList().getMeasurementValue("Negative pixel area µm^2") | |
total_Positive += tile.getMeasurementList().getMeasurementValue("Positive pixel area µm^2") | |
} | |
//summary[0] should be the original annotation, this assumes that there was only one original annotation | |
def summary = getAnnotationObjects().findAll {!it.getDisplayedName().toString().contains('Tile')}