CoCalc provides the best real-time collaborative environment for Jupyter Notebooks, LaTeX documents, and SageMath, scalable from individual users to large groups and classes!
Path: blob/main/seminar2/seminar2_2_transform.ipynb
Transformations
Before you can start a second-level analysis, you face the problem that all the output from your first-level analysis is still in subject-specific space. Because of the large differences in brain size and cortical structure between subjects, it is important to transform the data of each subject from its individual subject space into a common standardized reference space. This process of transformation is what we call normalization, and it consists of a rigid-body transformation (translations and rotations) as well as an affine transformation (zooms and shears). The most common template that subject data is normalized to is the MNI template.
MNI Space and templates
The Montreal Neurological Institute (MNI) has published several "template brains," which are generic brain shapes created by averaging together hundreds of individual anatomical scans. There are linear and nonlinear templates.
You can think of the image affine as a combination of a series of transformations to go from voxel coordinates to mm coordinates in terms of the magnet isocenter. Here is the EPI affine broken down into a series of transformations, with the results shown on the localizer image:
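The decomposition described above can be sketched in plain numpy. The values below (3 mm zooms, a 10-degree rotation, an isocenter shift) are illustrative assumptions, not the actual EPI affine from this seminar:

```python
import numpy as np

# Hypothetical EPI affine built as a series of transformations
# (values are illustrative, not from the real scan).
zooms = np.diag([3.0, 3.0, 3.0, 1.0])            # scaling by the voxel sizes
theta = np.deg2rad(10)
rotation = np.array([[1, 0, 0, 0],
                     [0, np.cos(theta), -np.sin(theta), 0],
                     [0, np.sin(theta),  np.cos(theta), 0],
                     [0, 0, 0, 1]])               # rigid-body rotation about x
translation = np.eye(4)
translation[:3, 3] = [-90.0, -126.0, -72.0]       # shift relative to the isocenter

# The full affine applies zooms first, then rotation, then translation.
affine = translation @ rotation @ zooms

# Map voxel indices (i, j, k) = (10, 20, 30) to mm coordinates
# using homogeneous coordinates.
ijk = np.array([10.0, 20.0, 30.0, 1.0])
xyz = affine @ ijk
```

Composing the pieces in this order reproduces the usual voxel-to-mm mapping: scale by the voxel sizes, rotate into scanner axes, then translate the origin.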
Affine transformations and rigid transformations
Why are the qform and sform affine transformations important? The sform can encode a full 12-parameter affine transform, whereas the 9-parameter qform is limited to encoding translations, rotations (via a quaternion representation), and zooms — it cannot encode shears.
Let's check how we can translate images using the qform. First: check the image orientation of the MNI template against our T1 image.
print("""Orientation comparison before transform:
T1 native image orientation:\n {0}
MNI Space template brain:\n {1} """.format(nii_file.affine, mni_space_template.affine))
Second: check the shapes of the images.
Third: use the from_matvec method to build a 4x4 affine from a 3x3 matrix and a translation vector, then compose it (via a dot product) with the qform affine.
Finally, create a new NIfTI image with the same array of intensities but with the new, translated qform affine, using Nifti1Image.
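The steps above can be sketched with plain numpy. The small helper below mirrors what `nibabel.affines.from_matvec` does; the identity qform and the 10 mm / -5 mm shift are illustrative assumptions:

```python
import numpy as np

# nibabel.affines.from_matvec builds a 4x4 affine from a 3x3 matrix
# and a translation vector; here is the same construction in numpy.
def from_matvec(matrix, vector):
    affine = np.eye(4)
    affine[:3, :3] = matrix
    affine[:3, 3] = vector
    return affine

# Hypothetical qform of the T1 image (identity orientation, 1 mm voxels).
qform = np.eye(4)

# A pure translation: shift the image 10 mm along x and -5 mm along y.
shift = from_matvec(np.eye(3), [10.0, -5.0, 0.0])

# Compose the translation with the qform.
new_qform = shift @ qform
```

With nibabel, this `new_qform` would then be passed as the affine when building the new image, e.g. `nib.Nifti1Image(data, new_qform)` (the variable names here are assumptions, not the notebook's own).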
Try: How about rotation?
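As a hint for the exercise, a rotation works the same way: embed a 3x3 rotation matrix in a 4x4 affine and compose it with the qform. A minimal sketch, with an assumed identity qform and a 90-degree rotation about z:

```python
import numpy as np

# Rotation by 90 degrees about the z-axis, embedded in a 4x4 affine.
theta = np.pi / 2
rotation = np.eye(4)
rotation[:3, :3] = np.array([[np.cos(theta), -np.sin(theta), 0],
                             [np.sin(theta),  np.cos(theta), 0],
                             [0,              0,             1]])

qform = np.eye(4)                 # hypothetical qform (identity)
new_qform = rotation @ qform

# After the rotation, the voxel x-axis maps onto the mm y-axis.
x_axis = new_qform @ np.array([1.0, 0.0, 0.0, 1.0])
```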
Spatial Normalization Methods
All brains are different. The brains of two subjects can differ in size by up to 30%. There may also be substantial variation in brain shape. Normalization allows one to stretch, squeeze, and warp each brain so that it matches a standard brain. Pros/cons:
Pros:
results can be generalized to a larger population;
results can be compared across studies;
results can be averaged across subjects.
Cons:
potential errors (always visually inspect the results);
reduced spatial resolution.
ANTs Registration
A volume-based registration method, often used for coregistration between series.
https://nipype.readthedocs.io/en/1.1.7/interfaces/generated/interfaces.ants/registration.html
https://github.com/ANTsX/ANTs/wiki/Anatomy-of-an-antsRegistration-call
ANTs initialize + ANTs transform
The following ANTs methods compute the optimal transformation and produce a .mat transformation matrix, which is then applied by calling registration(). ANTs initialize estimates the affine between the two spaces and outputs the transformation matrix; ANTs transform takes that affine matrix and outputs the image resampled into the target space.
Apply a transform list to map an image from one domain to another.
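Following the "Anatomy of an antsRegistration call" wiki linked above, a rigid + affine registration of a T1 image to a template can be sketched as the command below. The filenames and parameter values are assumptions for illustration, not the seminar's actual call:

```shell
antsRegistration --dimensionality 3 --float 0 \
  --output [t1_to_mni_, t1_to_mni_Warped.nii.gz] \
  --interpolation Linear \
  --initial-moving-transform [mni.nii.gz, t1.nii.gz, 1] \
  --transform Rigid[0.1] \
  --metric MI[mni.nii.gz, t1.nii.gz, 1, 32, Regular, 0.25] \
  --convergence [1000x500x250x100, 1e-6, 10] \
  --shrink-factors 8x4x2x1 \
  --smoothing-sigmas 3x2x1x0vox \
  --transform Affine[0.1] \
  --metric MI[mni.nii.gz, t1.nii.gz, 1, 32, Regular, 0.25] \
  --convergence [1000x500x250x100, 1e-6, 10] \
  --shrink-factors 8x4x2x1 \
  --smoothing-sigmas 3x2x1x0vox
```

The resulting affine (a .mat file prefixed `t1_to_mni_`) can then be applied to other images in the same space with antsApplyTransforms.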
Compare the shape and orientation produced by the two methods.
T1w and template alignment
Tips:
Run a bias-field correction (e.g. N4) before antsRegistration. It helps achieve a better registration.
Remove the skull before antsRegistration. If you have two brain-only images, you can be sure that surrounding tissues (i.e. the skull) will not take a toll on registration accuracy. If you are using skull-stripped versions, you can avoid using a mask, because you want the registration to use the "edge" features. If you use a mask, anything outside the mask will not be considered: the algorithm will try to match what's inside the brain, but not the edge of the brain itself (see Nick's explanation here).
Never register a lesioned brain to a healthy brain without a proper mask. The algorithm will simply pull the remaining parts of the lesioned brain to fill "the gap". Despite initial claims that you can register lesioned brains without masking out the lesion, there is evidence that results without lesion masking are sub-optimal. If you really don't have a lesion mask, even a coarse and imprecise drawing of the lesion helps (see Andersen 2010).
Don't forget to read the parts of the manual (https://github.com/ANTsX/ANTs/wiki/Anatomy-of-an-antsRegistration-call) related to registration.
File format conversion - volume-to-volume
You can convert files from FreeSurfer preprocessing into the MNI and T1 spaces using ANTs transforms together with FreeSurfer's native command mri_convert.
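For the FreeSurfer side of this, mri_convert changes the file format (e.g. from FreeSurfer's .mgz to NIfTI) so that ANTs tools can consume the volume. A minimal sketch with assumed filenames (not the seminar's actual paths):

```shell
# Convert a FreeSurfer volume to NIfTI so ANTs can read it.
mri_convert brain.mgz brain.nii.gz
```

The converted NIfTI can then be moved between spaces with the ANTs transforms produced above.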