@mpaquette
Created November 8, 2018 15:54
Computes the voxelwise btable from initial bvecs/bvals and calc_grad_perc_dev output (from the fullwarp output of gradunwarp)
#! /usr/bin/env python
# -*- coding: utf-8 -*-

# First draft of a script to compute voxelwise bvecs and bvals after
# gradient non-linearity (GNL).
# Works on the output of calc_grad_perc_dev;
# calc_grad_perc_dev works on the fullwarp output of gradunwarp.

import argparse

import nibabel as nib
import numpy as np

DESCRIPTION = """
Compute bvecs and bvals at each voxel according to the gradient percent deviation maps.
"""


def buildArgsParser():
    p = argparse.ArgumentParser(description=DESCRIPTION)
    p.add_argument('bvecs', action='store', type=str,
                   help='Path of the bvecs file')
    p.add_argument('bvals', action='store', type=str,
                   help='Path of the bvals file')
    p.add_argument('devX', action='store', type=str,
                   help='Path of the X gradient deviation file')
    p.add_argument('devY', action='store', type=str,
                   help='Path of the Y gradient deviation file')
    p.add_argument('devZ', action='store', type=str,
                   help='Path of the Z gradient deviation file')
    p.add_argument('outfile', action='store', type=str,
                   help='Path of the output btable file')
    p.add_argument('--mask', dest='mask', action='store', type=str,
                   help='Path of the mask file. If none is given, computes on the full volume.')
    return p


def main():
    # parse input
    parser = buildArgsParser()
    args = parser.parse_args()
    bvecsfile = args.bvecs
    bvalsfile = args.bvals
    devXfile = args.devX
    devYfile = args.devY
    devZfile = args.devZ
    outfile = args.outfile
    maskfile = args.mask

    # load data
    bvecs = np.genfromtxt(bvecsfile)
    if bvecs.shape[1] != 3:
        bvecs = bvecs.T
    bvals = np.genfromtxt(bvalsfile)
    devX_img = nib.load(devXfile)
    devY_img = nib.load(devYfile)
    devZ_img = nib.load(devZfile)
    devX = devX_img.get_fdata()
    devY = devY_img.get_fdata()
    devZ = devZ_img.get_fdata()

    if maskfile is None:
        mask = np.ones(devX.shape[:3])
        print('No mask used, beware of inaccurate volume boundary.')
    else:
        mask = nib.load(maskfile).get_fdata()

    # convert percentage to fraction
    devX *= 0.01
    devY *= 0.01
    devZ *= 0.01

    # build the (X, Y, Z, 3, 3) gradient non-linearity tensor
    dev = np.concatenate((devX[..., None], devY[..., None], devZ[..., None]), axis=4)

    # "q" gradient: gradient directions scaled by sqrt(b)
    bscaled_grad = bvecs * np.sqrt(bvals)[:, None]
    new_bscaled_grad = np.zeros(mask.shape + bvecs.shape)
    for idx in np.ndindex(mask.shape):
        if mask[idx]:
            # distort the bvecs with the local deviation tensor
            new_bscaled_grad[idx] = bscaled_grad.dot(dev[idx])

    # renormalize the gradient directions
    new_b = np.linalg.norm(new_bscaled_grad, axis=4)
    new_grad = new_bscaled_grad / new_b[..., None]
    # NaN removal (from the division by zero at b0)
    new_grad[..., bvals < 10, :] = 0
    # the new b is the squared norm of the distorted gradient
    new_b = new_b**2

    # make the (X, Y, Z, N, 4) btable
    btable = np.concatenate((new_grad, new_b[..., None]), axis=4)

    # save
    output_img = nib.Nifti1Image(btable, devX_img.affine, devX_img.header)
    nib.save(output_img, outfile)


if __name__ == '__main__':
    main()
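The per-voxel transformation the script applies can be checked on a toy case. This is a hypothetical sanity check, not part of the original gist: with an identity deviation tensor the b-table must come out unchanged, and a uniformly 2% stronger gradient should scale b by 1.02**2 while leaving the direction intact.

```python
# Hypothetical sanity check of the per-voxel transformation used above.
import numpy as np

bvecs = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
bvals = np.array([1000.0, 1000.0])

q = bvecs * np.sqrt(bvals)[:, None]  # b-scaled gradients, as in the script

# identity deviation tensor -> b-table unchanged
q_id = q.dot(np.eye(3))
b_id = np.linalg.norm(q_id, axis=1) ** 2
g_id = q_id / np.sqrt(b_id)[:, None]
assert np.allclose(b_id, bvals)
assert np.allclose(g_id, bvecs)

# gradients uniformly 2 % stronger at this voxel -> b scaled by 1.02**2
q_up = q.dot(1.02 * np.eye(3))
b_up = np.linalg.norm(q_up, axis=1) ** 2
assert np.allclose(b_up, bvals * 1.02**2)
```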
@tetsuyacht

Hello! I found a mistake when I tried to take advantage of your useful script. I believe line #98 should read as follows to get the correct new_grad and new_b:
new_bscaled_grad[idx] = bscaled_grad + bscaled_grad.dot(dev[idx])

@mpaquette (Author)

@tetsuyacht At the time this script was written, the inputs I was expecting and building the "dev" matrix from were such that "dev" would be an identity matrix when no correction was needed, not a null matrix as your correction requires. Are you using a different input than that of gradunwarp?

@tetsuyacht

Thank you for replying. I am using HCP gradunwarp 1.2.0 and FSL 6.0.4 for calc_grad_perc_dev. I am wondering whether my correction is right for these versions.

@mpaquette (Author)

I can't remember off the top of my head which versions I was using. The gradunwarp was most likely the same because it's never updated; I was probably on FSL 5, but I would be surprised if they touched such a simple script as this one. If you want to make sure, all you have to figure out is 1) whether the output is in percentage points or fractions (in my case it was percentage points, which is why I multiply by 0.01 at line 81) and 2) whether the gradient non-linearity is encoded "multiplicatively" or "additively" (again, in my case it was multiplicative). The easiest way is to look in a viewer at the three "gradient deviation" files, each of which has 3 volumes. You can ignore the off-diagonal terms for this check and only look at the first volume of devX, the second volume of devY and the third volume of devZ (i.e. the would-be diagonal when stacked into 3x3 matrix form). The isocenter of the gradient system should lie roughly in the center of your data if positioning was done correctly. The center should require no non-linearity correction, so once it is located, you can look at the numerical values to figure it out (in my case the isocenter diagonal terms were roughly [100 100 100], so percentage POINTS and multiplicative).
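A minimal sketch of that isocenter check, using synthetic arrays in place of the real deviation files (shapes and values here are assumptions for illustration only):

```python
# Hypothetical illustration of the isocenter check: build synthetic
# (X, Y, Z, 3) deviation arrays in the multiplicative, percentage-point
# convention and read off the would-be diagonal at the center voxel.
import numpy as np

X, Y, Z = 64, 64, 40
devX = np.zeros((X, Y, Z, 3))
devY = np.zeros((X, Y, Z, 3))
devZ = np.zeros((X, Y, Z, 3))
devX[..., 0] = 100.0  # first volume of devX
devY[..., 1] = 100.0  # second volume of devY
devZ[..., 2] = 100.0  # third volume of devZ

center = (X // 2, Y // 2, Z // 2)
diag = [devX[center][0], devY[center][1], devZ[center][2]]
print(diag)  # [100.0, 100.0, 100.0] -> percentage points, multiplicative
# a diagonal of roughly [0, 0, 0] would instead indicate an additive encoding
```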

@tetsuyacht

I am using the grad_dev that the DiffusionPreprocessingPipeline of the HCP Pipelines outputs. This grad_dev concatenates the three files (devX, devY and devZ) and has been converted to fractions, so I converted it back to percentages and split the file again before feeding it to your script. At least in my case, the grad_dev file seems to encode the non-linearity ADDITIVELY, because the values around the isocenter, where the linearity is good, are approximately 0, and some values distant from the isocenter are negative.
I understand why you wonder about the different grad_dev format. As far as I can tell from the source code of calc_grad_perc_dev, FSL 6 computes the derivative of the warp field used to correct for gradient distortion in each image dimension for each warp-field direction, i.e. ADDITIVELY.
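If the grad_dev really is additive and fractional, one way to feed it to the script above is to first convert it to the multiplicative, percentage-point form the script expects. A minimal sketch at a single voxel (the numeric values are made up for illustration):

```python
# Hypothetical conversion: additive fractional deviation tensor
# (FSL 6 / HCP grad_dev convention, per the discussion) to the
# multiplicative percentage-point form expected by the script.
import numpy as np

dev_add = np.array([[ 0.02, 0.00, 0.01],
                    [ 0.00, -0.03, 0.00],
                    [ 0.01, 0.00, 0.05]])  # made-up values at one voxel

dev_mult_percent = 100.0 * (np.eye(3) + dev_add)
# at the isocenter (dev_add == 0) this recovers the [100 100 100]
# diagonal described earlier
```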
