ContraReg: Contrastive Learning of Multi-modality Unsupervised Deformable Image Registration
KEYWORDS
Imaging modalities; Image registration; Regularization; Similarity (geometry); Feature learning
DOI: 10.1007/978-3-031-16446-0_7
Publication Date: 2022-09-16
ABSTRACT
Establishing voxelwise semantic correspondence across distinct imaging modalities is a foundational yet formidable computer vision task. Current multi-modality registration techniques maximize hand-crafted inter-domain similarity functions, are limited in modeling nonlinear intensity relationships and deformations, and may require significant re-engineering or underperform on new tasks, datasets, and domain pairs. This work presents ContraReg, an unsupervised contrastive representation learning approach to multi-modality deformable registration. By projecting learned multi-scale local patch features onto a jointly learned inter-domain embedding space, ContraReg obtains representations useful for non-rigid multi-modality alignment. Experimentally, ContraReg achieves accurate and robust results with smooth and invertible deformations across a series of baselines and ablations on a neonatal T1-T2 brain MRI registration task, with all methods validated over a wide range of deformation regularization strengths.

Accepted at MICCAI 2022. 13 pages, 6 figures, and 1 table.
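To make the abstract's core idea concrete, the sketch below shows a patch-wise InfoNCE contrastive term that pulls co-located patch features from two modalities together in a shared embedding space, of the kind the abstract describes. This is a minimal illustration under stated assumptions: the encoder features, projection heads (`proj_a`, `proj_b`), patch count, and temperature are hypothetical placeholders, not the authors' released implementation.

```python
# Hypothetical sketch of a patch-wise InfoNCE similarity term for
# multi-modality registration. Names and hyperparameters are assumptions
# for illustration, not the ContraReg reference code.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_moved, feat_fixed, proj_a, proj_b,
                   num_patches=256, temperature=0.07):
    """Contrastive loss over spatially corresponding local patch features.

    feat_moved, feat_fixed: (B, C, N_spatial...) feature maps from the
        warped moving image and the fixed image (one encoder scale).
    proj_a, proj_b: small MLP heads mapping each modality's patch
        features into a shared inter-domain embedding space.
    """
    B, C = feat_moved.shape[:2]
    # Flatten spatial dims so each location is one candidate patch.
    fa = feat_moved.reshape(B, C, -1).permute(0, 2, 1)   # (B, N, C)
    fb = feat_fixed.reshape(B, C, -1).permute(0, 2, 1)   # (B, N, C)
    # Sample the same random locations from both feature maps, so the
    # positive pair is the co-located patch across modalities.
    idx = torch.randperm(fa.shape[1], device=fa.device)[:num_patches]
    za = F.normalize(proj_a(fa[:, idx]), dim=-1)         # (B, P, E)
    zb = F.normalize(proj_b(fb[:, idx]), dim=-1)         # (B, P, E)
    # Similarity of every sampled patch in A to every sampled patch in B;
    # the diagonal holds the positive (co-located) pairs.
    logits = torch.bmm(za, zb.transpose(1, 2)) / temperature  # (B, P, P)
    targets = torch.arange(num_patches, device=logits.device)
    targets = targets.unsqueeze(0).expand(B, -1)
    return F.cross_entropy(logits.flatten(0, 1), targets.flatten())
```

In a full pipeline, a term like this would be summed over several encoder scales ("multi-scale local patch features") and combined with a deformation smoothness regularizer whose weight is the regularization strength the abstract says all methods were validated over.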