VoxelMorph is a general-purpose library of learning-based tools for image alignment/registration and, more generally, for modeling with deformations.
We have several VoxelMorph tutorials:
- the main VoxelMorph tutorial explains VoxelMorph and learning-based registration.
- a tutorial on training vxm on OASIS data, which we processed and released for free for HyperMorph.
To use the VoxelMorph library, either clone this repository and install the requirements listed in `setup.py`, or install directly with pip:
```
pip install voxelmorph
```
See the list of pre-trained models available here.
If you would like to train your own model, you will likely need to customize some of the data-loading code in `voxelmorph/generators.py` for your own datasets and data formats. However, it is possible to run many of the example scripts out-of-the-box, assuming that you provide a list of filenames in the training dataset. Training data can be in NIfTI, MGZ, or npz (numpy) format. Each npz file in your data list is assumed to have a `vol` parameter, which points to the image data to be registered, and an optional `seg` parameter, which points to a corresponding discrete segmentation (for semi-supervised learning). It is also assumed that the shape of all training image data is consistent, although this can, of course, be handled in a customized generator if desired.
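For illustration, a hypothetical npz training file with this layout could be written as follows (the shape and filename are placeholders, not requirements):

```python
import numpy as np

# 'vol' holds the image to register; 'seg' is an optional discrete
# segmentation used for semi-supervised training
vol = np.random.rand(160, 192, 224).astype('float32')
seg = np.zeros((160, 192, 224), dtype='int16')
np.savez_compressed('subject01.npz', vol=vol, seg=seg)
```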
For a given image list file `/images/list.txt` and output directory `/models/output`, the following script will train an image-to-image registration network (by default, the network described in MICCAI 2018) with an unsupervised loss. Model weights will be saved to the path specified by the `--model-dir` flag.
```
./scripts/tf/train.py --img-list /images/list.txt --model-dir /models/output --gpu 0
```
The `--img-prefix` and `--img-suffix` flags can be used to provide a consistent prefix or suffix to each path specified in the image list. Image-to-atlas registration can be enabled by providing an atlas file, e.g. `--atlas atlas.npz`. If you'd like to train using the original dense CVPR network (no diffeomorphism), use the `--int-steps 0` flag to specify no flow integration steps. Use the `--help` flag to inspect all of the command-line options that can be used to fine-tune the network architecture and training.
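For example, a hypothetical image-to-atlas training run combining these flags (all paths here are placeholders) might look like:

```
./scripts/tf/train.py --img-list /images/list.txt --img-prefix /images/ --img-suffix .npz --atlas atlas.npz --model-dir /models/output --gpu 0
```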
If you simply want to register two images, you can use the `register.py` script with the desired model file. For example, if we have a model `model.h5` trained to register a subject (moving) to an atlas (fixed), we could run:
```
./scripts/tf/register.py --moving moving.nii.gz --fixed atlas.nii.gz --moved warped.nii.gz --model model.h5 --gpu 0
```
This will save the moved image to `warped.nii.gz`. To also save the predicted deformation field, use the `--save-warp` flag. Both npz and NIfTI files can be used as input/output in this script.
To test the quality of a model by computing Dice overlap between an atlas segmentation and warped test-scan segmentations, run:
```
./scripts/tf/test.py --model model.h5 --atlas atlas.npz --scans scan01.npz scan02.npz scan03.npz --labels labels.npz
```
Just like the training data, the atlas and test npz files include `vol` and `seg` parameters, and the `labels.npz` file contains a list of the corresponding anatomical labels to include in the computed Dice score.
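For reference, here is a minimal numpy sketch of the per-label Dice metric this evaluation computes (the function name and signature are ours, not the script's):

```python
import numpy as np

def dice(seg_a, seg_b, labels):
    """Dice overlap per anatomical label between two discrete segmentations."""
    scores = []
    for label in labels:
        a, b = seg_a == label, seg_b == label
        overlap = 2.0 * np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        scores.append(overlap / total if total > 0 else np.nan)
    return np.array(scores)
```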
For the CC loss function, we found a regularization parameter of 1 to work best. For the MSE loss function, we found 0.01 to work best. For our data, we found `image_sigma=0.01` and `prior_lambda=25` to work best.
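As an illustration of how these weightings enter training, here is a minimal sketch using the library's loss classes; the volume shape is a placeholder, and this is a sketch of the idea, not a substitute for the training scripts:

```python
import tensorflow as tf
import voxelmorph as vxm

# dense registration network for a hypothetical 3D volume shape;
# the model outputs the moved image and the flow field
model = vxm.networks.VxmDense(inshape=(160, 192, 224))

# one loss per output: image similarity on the moved image and
# smoothness (Grad) regularization on the flow
losses = [vxm.losses.NCC().loss, vxm.losses.Grad('l2').loss]
weights = [1, 1]        # CC similarity: reg parameter of 1
# losses = [vxm.losses.MSE().loss, vxm.losses.Grad('l2').loss]
# weights = [1, 0.01]   # MSE similarity: reg parameter of 0.01

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss=losses, loss_weights=weights)
```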
In the original MICCAI code, the parameters were applied after scaling the velocity field. In the newest code, this has been "fixed", with different default parameters reflecting the change. We recommend running the updated code. However, if you'd like to run the very original MICCAI 2018 mode, please use `xy` indexing and the `use_miccai_int` network option, with the MICCAI 2018 parameters.
The spatial transform code, found at `voxelmorph.layers.SpatialTransformer`, accepts N-dimensional affine and dense transforms, including linear and nearest-neighbor interpolation options. Note that the original development of VoxelMorph used `xy` indexing, whereas we now emphasize `ij` indexing.
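A minimal usage sketch, assuming a dense displacement field and a placeholder volume shape:

```python
import tensorflow as tf
import voxelmorph as vxm

inshape = (160, 192, 224)                    # hypothetical volume shape
moving = tf.keras.Input(shape=(*inshape, 1))
flow = tf.keras.Input(shape=(*inshape, 3))   # dense displacement field

# warp the moving image; interp_method may be 'linear' or 'nearest'
moved = vxm.layers.SpatialTransformer(interp_method='linear')([moving, flow])
warp_model = tf.keras.Model([moving, flow], moved)
```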
For the MICCAI 2018 version, we integrate the velocity field using `voxelmorph.layers.VecInt`. By default, we integrate using scaling and squaring, which we found to be efficient.
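A minimal usage sketch, again with a placeholder volume shape:

```python
import tensorflow as tf
import voxelmorph as vxm

inshape = (160, 192, 224)                  # hypothetical volume shape
vel = tf.keras.Input(shape=(*inshape, 3))  # stationary velocity field

# integrate via scaling and squaring ('ss'); int_steps=7 is the
# library's default number of squaring steps
disp = vxm.layers.VecInt(method='ss', int_steps=7)(vel)
```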
If you use VoxelMorph or some part of the code, please cite (see BibTeX):
HyperMorph, avoiding the need to tune registration hyperparameters:
HyperMorph: Amortized Hyperparameter Learning for Image Registration.
Andrew Hoopes, Malte Hoffmann, Bruce Fischl, John Guttag, Adrian V. Dalca
IPMI: Information Processing in Medical Imaging. 2021. eprint arXiv:2101.01035
SynthMorph, avoiding the need to have data at training (!):
SynthMorph: learning contrast-invariant registration without acquired images.
Malte Hoffmann, Benjamin Billot, Juan Eugenio Iglesias, Bruce Fischl, Adrian V. Dalca
IEEE TMI: Transactions on Medical Imaging. 2022. eprint arXiv:2004.10282
For the atlas formation model:
Learning Conditional Deformable Templates with Convolutional Networks
Adrian V. Dalca, Marianne Rakic, John Guttag, Mert R. Sabuncu
NeurIPS 2019. eprint arXiv:1908.02738
For the diffeomorphic or probabilistic model:
Unsupervised Learning of Probabilistic Diffeomorphic Registration for Images and Surfaces
Adrian V. Dalca, Guha Balakrishnan, John Guttag, Mert R. Sabuncu
MedIA: Medical Image Analysis. 2019. eprint arXiv:1903.03545
Unsupervised Learning for Fast Probabilistic Diffeomorphic Registration
Adrian V. Dalca, Guha Balakrishnan, John Guttag, Mert R. Sabuncu
MICCAI 2018. eprint arXiv:1805.04605
For the original CNN model, MSE, CC, or segmentation-based losses:
VoxelMorph: A Learning Framework for Deformable Medical Image Registration
Guha Balakrishnan, Amy Zhao, Mert R. Sabuncu, John Guttag, Adrian V. Dalca
IEEE TMI: Transactions on Medical Imaging. 2019.
eprint arXiv:1809.05231
An Unsupervised Learning Model for Deformable Medical Image Registration
Guha Balakrishnan, Amy Zhao, Mert R. Sabuncu, John Guttag, Adrian V. Dalca
CVPR 2018. eprint arXiv:1802.02604
The `master` branch is still in testing as we roll out a major refactoring of the library; the pre-refactor code remains on the `legacy` branch. In particular, for our data we perform FreeSurfer `recon-all` steps up to skull stripping and affine normalization to Talairach space, and crop the images via `((48, 48), (31, 33), (3, 29))`. We encourage users to download and process their own data. See a list of medical imaging datasets here. Note that you likely do not need to perform all of the preprocessing steps, and indeed VoxelMorph has been used in other work with other data.
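A note on the crop specification, since it has caused confusion (see the question further below): we read each pair as the number of voxels removed from the start and end of one axis, numpy-style. This is an interpretation sketch, not code from the repository, and the helper name is ours:

```python
import numpy as np

# (start, end) voxel counts we interpret as being removed per axis
crop = ((48, 48), (31, 33), (3, 29))

def crop_vol(vol, crop):
    # drop `start` voxels from the front and `end` voxels from the
    # back of each axis
    slices = tuple(slice(start, dim - end)
                   for (start, end), dim in zip(crop, vol.shape))
    return vol[slices]

# a 256^3 conformed FreeSurfer volume then becomes (160, 192, 224)
vol = np.zeros((256, 256, 256))
print(crop_vol(vol, crop).shape)  # (160, 192, 224)
```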
To experiment with this method, please use `train_template.py` for unconditional templates and `train_cond_template.py` for conditional templates, which use the same conventions as voxelmorph (please note that these files are less polished than the rest of the voxelmorph library).
We've also provided an unconditional atlas in `data/generated_uncond_atlas.npz.npy`.
Model weights in h5 format are provided for the unconditional atlas here, and for the conditional atlas here.
Explore the atlases interactively here with tipiX!
SynthMorph is a strategy for learning registration without acquired imaging data, producing powerful networks agnostic to contrast induced by MRI (eprint arXiv:2004.10282). For a video and a demo showcasing the steps of generating random label maps from noise distributions and using these to train a network, visit synthmorph.voxelmorph.net.
We provide model files for a "shapes" variant of SynthMorph, which we train using images synthesized from random shapes only, and a "brains" variant, which we train using images synthesized from brain label maps. We train the brains variant by optimizing a loss term that measures volume overlap for a selection of brain labels. For registration with either model, please use the `register.py` script with the respective model weights.
Accurate registration requires the input images to be min-max normalized, such that voxel intensities range from 0 to 1, and to be resampled in the affine space of a reference image. The affine registration can be performed with a variety of packages; we choose FreeSurfer. First, we skull-strip the images with SAMSEG, keeping brain labels only. Second, we run `mri_robust_register`:
```
mri_robust_register --mov in.nii.gz --dst out.nii.gz --lta transform.lta --satit --iscale
mri_robust_register --mov in.nii.gz --dst out.nii.gz --lta transform.lta --satit --iscale --ixform transform.lta --affine
```
where we replace `--satit --iscale` with `--cost NMI` for registration across MRI contrasts.
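The min-max normalization mentioned above takes only a few lines of numpy/nibabel; the filenames here are placeholders:

```python
import nibabel as nib

# min-max normalize voxel intensities to [0, 1] after affine alignment
img = nib.load('affine_aligned.nii.gz')
vol = img.get_fdata()
vol = (vol - vol.min()) / (vol.max() - vol.min())
nib.save(nib.Nifti1Image(vol, img.affine), 'normalized.nii.gz')
```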
While we cannot release most of the data used in the VoxelMorph papers, as redistribution is prohibited, we thoroughly processed and re-released OASIS1 while developing HyperMorph. We now include a quick vxm tutorial for training VoxelMorph on the neurite-oasis data.
For code problems or questions, please open an issue; for general registration or VoxelMorph questions, please start a discussion.
This fixes the issue https://github.com/voxelmorph/voxelmorph/issues/517 by simply updating the confusing comment.
https://github.com/voxelmorph/voxelmorph/blob/204b87fd6147ba6c7fed7e441b2f3e85ba3a6b74/voxelmorph/torch/losses.py#L21
The line linked above seems wrong. The code seems to assume that `Ii` and `J` are of shape `[batch_size, 1, *vol_shape]` instead of `[batch_size, *vol_shape, nb_feats]`. (Indeed, the shape of `sum_filt` in line 71 is `[1, 1, win, ..., win]`.)
Could this be verified, and if so, could the comment be corrected?
When I am using the voxelmorph module, the following error occurs:
```
nibabel.deprecator.ExpiredDeprecationError: get_data() is deprecated in favor of get_fdata(), which has a more predictable return type. To obtain get_data() behavior going forward, use numpy.asanyarray(img.dataobj).
```
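The message itself points to the fix: replace the removed `get_data()` call with one of the suggested alternatives. A minimal sketch (the filename is a placeholder):

```python
import numpy as np
import nibabel as nib

img = nib.load('image.nii.gz')

# data = img.get_data()           # old API that raises the error above

data = img.get_fdata()            # returns floating-point data
data = np.asanyarray(img.dataobj) # preserves the on-disk dtype
```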
Task (what are you trying to do/register?)
I am trying to prepare my data for use according to the README.md, but I am stuck on knowing how/where to crop exactly.
What have you tried
By following the author's comments on this repository's issues, I have been able to register the FreeSurfer result to Talairach space using `talairach.xfm`. Now all that's left is to crop the images, but I can't figure out how.
For reference, the README.md states:
In particular, we perform FreeSurfer recon-all steps up to skull stripping and affine normalization to Talairach space, and crop the images via ((48, 48), (31, 33), (3, 29)).
I am confused as to what (48, 48) and so on mean in this context. The information does not seem to be enough to make a plane (only three 2D coordinates). Or am I missing something?
It would be great if someone could share the code they used, or more detail about how exactly the cropping was done!
Thank you in advance for your help 👍
This is the updated VoxelMorph tutorial: https://colab.research.google.com/drive/1WiqyF7dCdnNBIANEY80Pxw_mVz4fyV-S?usp=sharing. Can you please check it out? Is it the correct way to find the Dice coefficient for 2D and 3D images?