Xu Zhe, Luo Jie, Yan Jiangpeng, Pulya Ritvik, Li Xiu, Wells William, Jagadeesan Jayender
Shenzhen International Graduate School, Tsinghua University, China.
Brigham and Women's Hospital, Harvard Medical School, USA.
Med Image Comput Comput Assist Interv. 2020 Oct;12263:222-232. doi: 10.1007/978-3-030-59716-0_22. Epub 2020 Sep 29.
Deformable image registration between Computed Tomography (CT) and Magnetic Resonance (MR) images is essential for many image-guided therapies. In this paper, we propose a novel translation-based unsupervised deformable image registration method. Distinct from other translation-based methods that attempt to convert the multimodal problem (e.g., CT-to-MR) into a unimodal problem (e.g., MR-to-MR) via image-to-image translation, our method leverages the deformation fields estimated from both (i) the translated MR image and (ii) the original CT image in a dual-stream fashion, and automatically learns how to fuse them to achieve better registration performance. The multimodal registration network can be effectively trained with computationally efficient similarity metrics, without any ground-truth deformation. Our method has been evaluated on two clinical datasets and demonstrates promising results compared to state-of-the-art traditional and learning-based methods.
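The core dual-stream idea in the abstract — estimating one deformation field from the translated MR image, another from the original CT image, and learning how to fuse them — can be sketched as follows. This is a minimal illustrative sketch, not the paper's architecture: the sigmoid per-voxel gate, the `fuse_fields`/`warp` helpers, and the nearest-neighbor warping are all assumptions introduced here; in the actual method the fields and fusion weights would be produced by trained networks.

```python
import numpy as np

def fuse_fields(phi_mr, phi_ct, gate_logits):
    """Fuse two dense 2D deformation fields with a per-voxel gate.

    phi_mr: field from the translated-MR stream, shape (H, W, 2)
    phi_ct: field from the original-CT stream, shape (H, W, 2)
    gate_logits: raw fusion scores, shape (H, W, 1); in a learned system
                 these would come from a fusion sub-network (hypothetical here).
    """
    w = 1.0 / (1.0 + np.exp(-gate_logits))   # sigmoid gate in [0, 1]
    return w * phi_mr + (1.0 - w) * phi_ct   # convex per-voxel combination

def warp(image, field):
    """Warp a 2D image with a dense displacement field (nearest-neighbor)."""
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(ys + field[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + field[..., 1]).astype(int), 0, W - 1)
    return image[src_y, src_x]

# Toy example: a gate that strongly favors the MR-stream field.
H, W = 4, 4
phi_mr = np.ones((H, W, 2))    # displaces every voxel by (+1, +1)
phi_ct = np.zeros((H, W, 2))   # identity field
fused = fuse_fields(phi_mr, phi_ct, gate_logits=np.full((H, W, 1), 10.0))
img = np.arange(H * W, dtype=float).reshape(H, W)
warped = warp(img, fused)      # fused field is ~phi_mr, so a (+1, +1) shift
```

In an unsupervised setting, such a fused field would be optimized end-to-end by minimizing an image-similarity loss between the warped moving image and the fixed image, which is what allows training without ground-truth deformations.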