@agirault
Last active August 31, 2017 20:36
Reslice vtkImageData from vtkDICOMReader using patient matrix
#include <vtkDICOMDirectory.h>
#include <vtkDICOMReader.h>
#include <vtkImageData.h>
#include <vtkImageReslice.h>
#include <vtkMatrix4x4.h>
#include <vtkSmartPointer.h>

// Directory: scan the DICOM folder and group its files into series
const char *folderASCII = [folder cStringUsingEncoding:NSASCIIStringEncoding];
auto dicomDir = vtkSmartPointer<vtkDICOMDirectory>::New();
dicomDir->SetDirectoryName(folderASCII);
dicomDir->Update();

// Reader: load the first series
auto reader = vtkSmartPointer<vtkDICOMReader>::New();
reader->SetFileNames(dicomDir->GetFileNamesForSeries(0));
//reader->SetMemoryRowOrderToFileNative();
reader->Update();
vtkImageData *data = reader->GetOutput();

// Patient matrix: vtkImageReslice maps output grid points back into input
// data coordinates, so it needs the *inverse* of the PatientMatrix
auto matrixInv = vtkSmartPointer<vtkMatrix4x4>::New();
vtkMatrix4x4::Invert(reader->GetPatientMatrix(), matrixInv);

// Transform: resample the volume into patient (LPS) coordinates
auto transformFilter = vtkSmartPointer<vtkImageReslice>::New();
transformFilter->SetInputData(data);
transformFilter->SetResliceAxes(matrixInv);
transformFilter->SetInterpolationModeToLinear();
transformFilter->Update();

// Final image, now in patient coordinates
data = transformFilter->GetOutput();
agirault commented Aug 2, 2017

Applying the PatientMatrix on the mapper (UserMatrix)

@dgobbi: Generally, I set the UserMatrix to the PatientMatrix and let the mapper figure out what to do (the vtkImageResliceMapper uses a combination of some CPU operations and some GPU). This works well if you have multiple data sets, each with a different PatientMatrix.
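This approach can be sketched as follows; the function name is ours, and the snippet assumes VTK with the vtk-dicom extension is available:

```cpp
// Sketch: display the image with its PatientMatrix applied as the prop's
// UserMatrix, so the mapper handles orientation and no voxels are resampled.
#include <vtkDICOMReader.h>
#include <vtkImageResliceMapper.h>
#include <vtkImageSlice.h>
#include <vtkSmartPointer.h>

vtkSmartPointer<vtkImageSlice> MakePatientOrientedSlice(vtkDICOMReader* reader)
{
  auto mapper = vtkSmartPointer<vtkImageResliceMapper>::New();
  mapper->SetInputConnection(reader->GetOutputPort());
  mapper->SliceAtFocalPointOn();   // slice follows the camera focal point
  mapper->SliceFacesCameraOn();    // slice plane faces the camera

  auto slice = vtkSmartPointer<vtkImageSlice>::New();
  slice->SetMapper(mapper);
  // Place the data in patient/world (LPS) coordinates without resampling
  slice->SetUserMatrix(reader->GetPatientMatrix());
  return slice;
}
```

Because each vtkImageSlice carries its own UserMatrix, this scales naturally to multiple data sets with different PatientMatrix values.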

@agirault: But then don't you also need to apply the PatientMatrix to the camera if you slice based on its focal point? I currently have issues when sliding the focal point along the slices, where the view just disappears.

@dgobbi: Yes, you also need to properly set the position & focal point of the camera. So it's easiest to figure out the desired position with respect to the vtkImageData's data coordinates, and then apply the PatientMatrix to get the position in patient/world coords. That's the main reason that I made my suggestion below:

Applying the inverse of the PatientMatrix on the pointsets

@dgobbi: However, if you have only one data set and if you want to keep things simple, then I suggest that you do things the other way around: instead of applying the PatientMatrix to the image, you could get the inverse of the PatientMatrix and apply that to the surfaces (e.g. with vtkTransformPolyDataFilter, or simply by setting the UserMatrix of the vtkActor to the inverse of the PatientMatrix). Applying a transformation to a surface is an efficient and virtually lossless operation, whether it is performed on the CPU or the GPU.
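The UserMatrix variant of this suggestion can be sketched as below; the function name and arguments are ours:

```cpp
// Sketch: bring a surface into the image's data coordinates by giving its
// actor the inverse of the PatientMatrix. The vertices themselves are never
// modified, so the operation is lossless.
#include <vtkActor.h>
#include <vtkMatrix4x4.h>
#include <vtkSmartPointer.h>

void MoveSurfaceToDataCoords(vtkActor* surfaceActor, vtkMatrix4x4* patientMatrix)
{
  auto inverse = vtkSmartPointer<vtkMatrix4x4>::New();
  vtkMatrix4x4::Invert(patientMatrix, inverse);
  surfaceActor->SetUserMatrix(inverse);
}
```

Using the actor's UserMatrix rather than vtkTransformPolyDataFilter keeps the original polydata untouched and lets the GPU apply the transform at render time.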

@agirault: Makes sense; however, I would like the datasets to be correctly oriented in space, using the same LPS orientation widget across my different views. If I move the surfaces instead, I lose that.

@dgobbi: I would not recommend using vtkImageReslice to transform the full image volume. There are situations where such a transformation is necessary (e.g. for image fusion), but for basic visualization it's best to let the mapper/actor perform any transformations that are needed.

@agirault: Are there any other major issues apart from the performance? I'm aware that resampling the image is much slower than transforming a point set, but since I only do this once at load time, I assume the performance cost wouldn't affect my use case.

Applying the PatientMatrix on the data (ImageReslice)

@dgobbi: For your code, the reason it fails is that vtkImageReslice needs the inverse of the PatientMatrix.

The way that vtkImageReslice works is that it resamples the image onto a new point grid. So when you use vtkImageReslice, you define an "output" point grid by setting the OutputSpacing, the OutputOrigin, and the OutputExtent. Then, you supply a transform (either the ResliceAxes or the ResliceTransform) that will map these output grid points to the data coordinates of the input vtkImageData. Each output grid point is colored according to where it lands in the input data set, using whichever interpolation method has been selected.

One way to think about it is that vtkImageReslice is analogous to 3D texture mapping. The input image is equivalent to the 3D texture. The output (i.e. the grid of output points) is the set of vertices that you want to color with the "texture". The transform maps from output data coords to input data coords, where "input data coords" are analogous to 3D texture coords, i.e. they tell you where you are in the input data volume.
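The output-grid view can be made explicit as below; the grid values and function name are illustrative assumptions, and `outputToInput` is the inverse of the PatientMatrix:

```cpp
// Sketch: vtkImageReslice resamples the input onto this explicitly defined
// output grid; ResliceAxes maps each output grid point back into input data
// coordinates (analogous to 3D texture coordinates) before interpolation.
#include <vtkAlgorithmOutput.h>
#include <vtkImageReslice.h>
#include <vtkMatrix4x4.h>
#include <vtkSmartPointer.h>

vtkSmartPointer<vtkImageReslice> MakeReslice(vtkAlgorithmOutput* input,
                                             vtkMatrix4x4* outputToInput)
{
  auto reslice = vtkSmartPointer<vtkImageReslice>::New();
  reslice->SetInputConnection(input);
  // The output grid: where the resampled values will live
  reslice->SetOutputSpacing(1.0, 1.0, 1.0);
  reslice->SetOutputOrigin(0.0, 0.0, 0.0);
  reslice->SetOutputExtent(0, 255, 0, 255, 0, 127);
  // The transform: output grid coords -> input data coords
  reslice->SetResliceAxes(outputToInput);
  reslice->SetInterpolationModeToLinear();
  return reslice;
}
```

If the output grid settings are omitted, vtkImageReslice derives them from the input, which is why the code at the top of this gist only needed to set the ResliceAxes.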

@agirault: Merci David, that worked well! See code attached.
