Li Lei, Zhu Liumin, Wang Qifu, Dong Zhuoli, Liao Tianli, Li Peng
Key Laboratory of Grain Information Processing and Control, Henan University of Technology, Zhengzhou, 450001, China.
College of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China.
Interdiscip Sci. 2025 Apr 19. doi: 10.1007/s12539-025-00707-5.
Multi-modal medical image registration aims to align images from different modalities to establish spatial correspondences. Although deep learning-based methods have shown great potential, the lack of explicit reference relations makes unsupervised multi-modal registration a challenging task. In this paper, we propose a novel unsupervised dual-stream multi-modal registration framework (DSMR), which combines a dual-stream registration network with a refinement module. Unlike existing methods that reduce multi-modal registration to a uni-modal problem via a translation network, DSMR leverages the moving, fixed and translated images to generate two deformation fields. Specifically, we first use a translation network to convert the moving image into a translated image that resembles the fixed image. Then, we employ the dual-stream registration network to compute two deformation fields: an initial deformation field generated from the fixed and moving images, and a translated deformation field generated from the fixed and translated images. The translated deformation field acts as a pseudo-ground truth to refine the initial deformation field and to mitigate issues such as artificial features introduced by translation. Finally, we use the refinement module to enhance the deformation field by integrating registration errors and contextual information. Extensive experimental results show that DSMR achieves exceptional performance, demonstrating strong generalization in learning spatial relationships between multi-modal images without supervision. The source code of this work is available at https://github.com/raylihaut/DSMR.
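The dual-stream pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration of the data flow only: `toy_register` stands in for the actual registration network, the `translate` argument stands in for the translation network, and the function, array shapes and consistency loss are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def warp(image, flow):
    """Apply a dense displacement field to a 2-D image (nearest-neighbour resampling)."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def toy_register(fixed, moving):
    """Stand-in for a learned registration network: predicts a displacement
    field of shape (2, H, W); here it simply returns zero displacement."""
    return np.zeros((2, *fixed.shape))

def dsmr_step(fixed, moving, translate=lambda x: x):
    """One DSMR-style forward pass (sketch):
    1. translate the moving image toward the fixed modality,
    2. predict two flows (fixed vs. moving, fixed vs. translated),
    3. use the translated flow as a pseudo-ground truth for the initial flow."""
    translated = translate(moving)                # translation network (stand-in)
    flow_init = toy_register(fixed, moving)       # stream 1: fixed vs. moving
    flow_trans = toy_register(fixed, translated)  # stream 2: fixed vs. translated
    # pseudo-ground-truth consistency between the two deformation fields
    consistency = np.abs(flow_init - flow_trans).mean()
    warped = warp(moving, flow_init)
    return warped, consistency
```

With the zero-displacement stand-in, the warped output equals the moving image and the consistency loss is zero; a real system would replace `toy_register` and `translate` with trained networks and feed the consistency term, registration errors and context into the refinement module.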