Deng Liwei, Zou Yanchao, Yang Xin, Wang Jing, Huang Sijuan
Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080 China.
Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060 Guangdong China.
Biomed Eng Lett. 2024 Jan 10;14(3):497-509. doi: 10.1007/s13534-023-00344-1. eCollection 2024 May.
In recent years, deep learning has driven significant progress in medical image registration, and non-rigid registration methods that use deep neural networks to generate a deformation field achieve higher accuracy. However, unlike monomodal medical image registration, multimodal medical image registration is a more complex and challenging task. This paper proposes a new linear-to-nonlinear framework (L2NLF) for multimodal medical image registration. The first, linear stage is essentially image conversion: it reduces the difference between the two images without changing the authenticity of the medical images, thereby turning multimodal registration into monomodal registration. The second, nonlinear stage is essentially unsupervised deformable registration based on a deep neural network. In this paper, a new registration network, CrossMorph, is designed: a deep neural network with a U-Net-like structure. As the backbone of the encoder, the volume CrossFormer block better extracts both local and global information, and the booster module further promotes the reduction of deep and shallow features. Qualitative and quantitative experiments on T1 and T2 brain data from 240 patients show that L2NLF achieves an excellent effect in the image-conversion stage at very low computational cost, without altering the authenticity of the converted images. Compared with current state-of-the-art registration methods, CrossMorph effectively reduces the average surface distance, improves the Dice score, and improves the smoothness of the deformation field. The proposed methods have potential value in clinical application.
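To illustrate the second, nonlinear stage described above, the following is a minimal sketch of unsupervised deformable registration: a small network predicts a dense displacement field, the moving image is warped with it, and training minimizes an image-similarity term plus a smoothness penalty on the field. This is not the authors' CrossMorph implementation; the network `TinyRegNet`, the layer sizes, the MSE similarity term, and the loss weight are all illustrative assumptions.

```python
# Hedged sketch of unsupervised deformable registration (assumed setup,
# not the CrossMorph architecture from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyRegNet(nn.Module):
    """Toy encoder/decoder that predicts a 3-channel 3D displacement field."""

    def __init__(self, ch: int = 16):
        super().__init__()
        self.enc1 = nn.Conv3d(2, ch, 3, stride=2, padding=1)      # moving + fixed stacked
        self.enc2 = nn.Conv3d(ch, ch * 2, 3, stride=2, padding=1)
        self.dec1 = nn.ConvTranspose3d(ch * 2, ch, 4, stride=2, padding=1)
        self.dec2 = nn.ConvTranspose3d(ch, ch, 4, stride=2, padding=1)
        self.flow = nn.Conv3d(ch, 3, 3, padding=1)                # displacement field

    def forward(self, moving, fixed):
        x = torch.cat([moving, fixed], dim=1)
        x = F.leaky_relu(self.enc1(x), 0.2)
        x = F.leaky_relu(self.enc2(x), 0.2)
        x = F.leaky_relu(self.dec1(x), 0.2)
        x = F.leaky_relu(self.dec2(x), 0.2)
        return self.flow(x)                                       # (B, 3, D, H, W)


def warp(moving, flow):
    """Warp `moving` with a dense displacement field via grid_sample."""
    B, _, D, H, W = moving.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(D), torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([zz, yy, xx], dim=0).float().to(moving.device)  # identity grid
    new_locs = grid.unsqueeze(0) + flow                                # (B, 3, D, H, W)
    # Normalise to [-1, 1] and reorder to (x, y, z) as grid_sample expects.
    for i, size in enumerate([D, H, W]):
        new_locs[:, i] = 2.0 * new_locs[:, i] / (size - 1) - 1.0
    new_locs = new_locs.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]]
    return F.grid_sample(moving, new_locs, align_corners=True)


def smoothness(flow):
    """Gradient penalty encouraging a smooth deformation field."""
    dz = (flow[:, :, 1:] - flow[:, :, :-1]).abs().mean()
    dy = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
    dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx


# Usage: after the linear stage has converted one modality into the other's
# appearance, the pair is effectively monomodal, so a simple intensity-based
# similarity term (MSE here, as an assumption) can drive training.
net = TinyRegNet()
moving = torch.rand(1, 1, 32, 32, 32)   # converted (e.g. T1-like) image
fixed = torch.rand(1, 1, 32, 32, 32)    # reference T1 image
flow = net(moving, fixed)
warped = warp(moving, flow)
loss = F.mse_loss(warped, fixed) + 0.01 * smoothness(flow)  # assumed weight
loss.backward()
```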