This example demonstrates how to segment a brain tumor using NVIDIA's AI-assisted annotation (AIAA) tool in batch mode (without the GUI, using qMRMLSegmentEditorWidget) in 3D Slicer.
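# Note: this script is meant to run in 3D Slicer's Python environment,
# where the slicer module is available without an explicit import.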
# Load/generate input data
################################################
# Load master volume
import SampleData
sampleDataLogic = SampleData.SampleDataLogic()
masterVolumeNode = sampleDataLogic.downloadMRBrainTumor1()
# Define boundary points
import numpy as np
inputPoints = np.array([[  0.4685    , 49.68717985, 25.36491174],
                        [  0.4685    , 30.92409445, 53.85700439],
                        [  0.4685    ,  7.29650543, 34.3989899 ],
                        [  0.4685    , 28.83930719,  6.60182634],
                        [ 17.6458556 , 24.98187703, 30.92434445],
                        [-22.80139768, 26.96458552, 30.92434445]])
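# The boundary points above are world coordinates (RAS, in millimeters),
# the coordinate system used by Slicer markups.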
aiaaModelName = "annotation_mri_brain_tumors_t2_wt"
# Run filter
################################################
# Create segmentation node to store the segmentation result
segmentationNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLSegmentationNode")
segmentationNode.CreateDefaultDisplayNodes()
segmentationNode.GetSegmentation().AddEmptySegment()
# Create segment editor to get access to effects
segmentEditorWidget = slicer.qMRMLSegmentEditorWidget()
segmentEditorWidget.setMRMLScene(slicer.mrmlScene)
segmentEditorNode = slicer.vtkMRMLSegmentEditorNode()
slicer.mrmlScene.AddNode(segmentEditorNode)
segmentEditorWidget.setMRMLSegmentEditorNode(segmentEditorNode)
segmentEditorWidget.setSegmentationNode(segmentationNode)
segmentEditorWidget.setMasterVolumeNode(masterVolumeNode)
# Run segmentation
segmentEditorWidget.setActiveEffectByName("Nvidia AIAA")
effect = segmentEditorWidget.activeEffect()
# Optionally, a custom server address can be set like this:
# effect.self().ui.serverComboBox.currentText = serverUrl
# effect.self().onClickFetchModels()
effect.self().ui.annotationModelSelector.currentText = aiaaModelName
# This is only needed for annotation (not for automatic segmentation)
slicer.util.updateMarkupsControlPointsFromArray(effect.self().annotationFiducialNode, inputPoints)
# For automatic segmentation use: effect.self().onClickSegmentation()
effect.self().onClickAnnotation()
# Clean up and show results
################################################
# Clean up
slicer.mrmlScene.RemoveNode(segmentEditorNode)
# Show the result in 3D
segmentationNode.CreateClosedSurfaceRepresentation()
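To run this fully in batch mode, save the script to a file and launch Slicer without the GUI, for example: Slicer --no-main-window --python-script /path/to/aiaa_brain_tumor.py (the file name here is just an example; --no-main-window and --python-script are standard Slicer command-line options).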
Thanks for the suggestion. This example shows how simple it is to use the AIAA Slicer plugin if the data is already in 3D Slicer. It really takes just a few lines of trivial code (the rest of the code just sets up the example: downloading sample data, etc.):
segmentEditorWidget.setActiveEffectByName("Nvidia AIAA")
effect = segmentEditorWidget.activeEffect()
effect.self().ui.annotationModelSelector.currentText = aiaaModelName
slicer.util.updateMarkupsControlPointsFromArray(effect.self().annotationFiducialNode, inputPoints)
effect.self().onClickAnnotation()
The underlying low-level AIAA client API is nice, too, but you would probably need 5-10x more code to get all the data out of the Slicer scene, run the processing, and put the results back into the Slicer scene.
You can use the AIAA Python client directly to get everything done; you don't need Slicer behind the scenes.
References:
https://github.com/NVIDIA/ai-assisted-annotation-client/blob/master/py_client/client_api.py
https://github.com/NVIDIA/ai-assisted-annotation-client/blob/master/py_client/test_aiaa_server.py#L117-L179
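For reference, here is a minimal sketch of what direct use of the AIAA Python client could look like, based on the linked client_api.py. Method names, arguments, and the point coordinates below are illustrative and may differ between client versions; see the linked test_aiaa_server.py for tested usage:
# Sketch only: assumes client_api.py from the linked repo is on the Python path
from client_api import AIAAClient

# Point the client at a running AIAA server (URL is an example)
client = AIAAClient(server_url='http://127.0.0.1:5000')

# List the models available on the server
models = client.model_list()
print(models)

# Run annotation (DEXTR3D) from boundary points; input and output are
# image files, so no Slicer scene is needed. The points below are
# placeholder voxel coordinates, not real landmarks.
point_set = [[70, 172, 105], [105, 161, 142], [58, 130, 160],
             [91, 150, 73], [135, 158, 120], [50, 158, 120]]
client.dextr3d(
    model='annotation_mri_brain_tumors_t2_wt',
    point_set=point_set,
    image_in='input_t2.nii.gz',
    image_out='tumor_segmentation.nii.gz')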