Dey Neel, Schlemper Jo, Mohseni Salehi Seyed Sadegh, Zhou Bo, Gerig Guido, Sofka Michal
Department of Computer Science & Engineering, New York University, Brooklyn, NY, USA.
Hyperfine Research, Guilford, CT, USA.
Med Image Comput Comput Assist Interv. 2022 Sep;13436:66-77. doi: 10.1007/978-3-031-16446-0_7. Epub 2022 Sep 17.
Establishing voxelwise semantic correspondence across distinct imaging modalities is a foundational yet formidable computer vision task. Current multi-modality registration techniques maximize hand-crafted inter-domain similarity functions, are limited in modeling nonlinear intensity relationships and deformations, and may require significant re-engineering or underperform on new tasks, datasets, and domain pairs. This work presents ContraReg, an unsupervised contrastive representation learning approach to multi-modality deformable registration. By projecting learned multi-scale local patch features onto a jointly learned inter-domain embedding space, ContraReg obtains representations useful for non-rigid multi-modality alignment. Experimentally, ContraReg achieves accurate and robust results with smooth and invertible deformations across a series of baselines and ablations on a neonatal T1-T2 brain MRI registration task, with all methods validated over a wide range of deformation regularization strengths.
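The abstract describes projecting multi-scale local patch features from both modalities into a jointly learned embedding space and using them as a contrastive similarity signal for registration. The sketch below illustrates one plausible form of such a patchwise contrastive (InfoNCE-style) objective, assuming corresponding patch locations in the warped and fixed images serve as positive pairs and all other patches as negatives; the projector architecture, dimensions, and function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a patchwise contrastive objective for multi-modality registration.
# Assumption: features sampled at corresponding locations of the warped T1 and fixed T2
# images form positive pairs; non-corresponding patches act as negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchProjector(nn.Module):
    """Projects sampled patch features from either modality into a joint embedding space."""

    def __init__(self, in_dim: int, embed_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_patches, in_dim) -> L2-normalized embeddings (num_patches, embed_dim)
        return F.normalize(self.mlp(feats), dim=-1)


def patch_info_nce(z_moving: torch.Tensor, z_fixed: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """InfoNCE loss over patch embeddings: the i-th moving patch should match the i-th fixed patch."""
    logits = z_moving @ z_fixed.t() / tau                      # (N, N) pairwise similarities
    targets = torch.arange(z_moving.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage: 256 sampled patches with 64-channel encoder features from each modality.
    proj = PatchProjector(in_dim=64)
    f_t1 = torch.randn(256, 64)   # features from the warped T1 image (illustrative)
    f_t2 = torch.randn(256, 64)   # features from the fixed T2 image (illustrative)
    loss = patch_info_nce(proj(f_t1), proj(f_t2))
    print(float(loss))
```

In practice such a loss would be computed at multiple encoder scales and combined with a deformation regularizer, consistent with the multi-scale patch features and regularization strengths mentioned in the abstract.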