Xu Zhe, Yan Jiangpeng, Luo Jie, Li Xiu, Jagadeesan Jayender
Shenzhen International Graduate School, Tsinghua University, Shenzhen, China.
Brigham and Women's Hospital, Harvard Medical School, Boston, USA.
Proc IEEE Int Conf Acoust Speech Signal Process. 2021 Jun;2021. doi: 10.1109/icassp39728.2021.9414320. Epub 2021 May 13.
Multimodal image registration (MIR) is a fundamental procedure in many image-guided therapies. Recently, unsupervised learning-based methods have demonstrated promising accuracy and efficiency in deformable image registration. However, the deformation fields estimated by existing methods rely entirely on the to-be-registered image pair, making it difficult for the networks to detect mismatched boundaries and leading to unsatisfactory organ boundary alignment. In this paper, we propose a novel multimodal registration framework that leverages the deformation fields estimated from both (i) the original to-be-registered image pair and (ii) their corresponding gradient intensity maps, and adaptively fuses them with the proposed gated fusion module. With the help of auxiliary gradient-space guidance, the network can concentrate more on the spatial relationship of the organ boundary. Experimental results on two clinically acquired CT-MRI datasets demonstrate the effectiveness of our proposed approach.
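The abstract names two ingredients: gradient intensity maps of the input images, and a gated fusion of two deformation fields. The paper's module is learned end-to-end; the sketch below is only a hypothetical NumPy illustration of those two operations, with the gate given as a logit map rather than predicted by a network. All function names here are illustrative, not from the paper.

```python
import numpy as np

def gradient_intensity_map(img):
    """Gradient-magnitude map of a 2-D image via finite differences.

    A simple stand-in for the gradient intensity maps the framework
    feeds to its auxiliary registration branch.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    return np.sqrt(gx ** 2 + gy ** 2)

def gated_fusion(field_img, field_grad, gate_logits):
    """Element-wise gated fusion of two deformation fields.

    field_img:   field estimated from the original image pair
    field_grad:  field estimated from the gradient intensity maps
    gate_logits: per-voxel logits; in the paper the gate is produced
                 by a learned module, here it is just an input.
    """
    g = 1.0 / (1.0 + np.exp(-gate_logits))  # sigmoid -> weights in (0, 1)
    return g * field_img + (1.0 - g) * field_grad
```

With zero logits the gate is 0.5 everywhere, so the two fields are averaged; large positive logits favor the image-pair field, large negative logits favor the gradient-guided field, which is how a learned gate could emphasize boundary regions.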