Jiang Ping, Wu Sijia, Qin Wenjian, Xie Yaoqin
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China.
University of Chinese Academy of Sciences, Beijing 100049, China.
Bioengineering (Basel). 2024 Dec 23;11(12):1304. doi: 10.3390/bioengineering11121304.
In recent years, image-guided brachytherapy has become an important treatment for patients with locally advanced cervical cancer, and multi-modality image registration is a key step in this workflow. However, owing to patient motion and other factors, the deformation between images of different modalities is discontinuous, which makes the registration of pelvic computed tomography (CT) and magnetic resonance (MR) images very difficult. In this paper, we propose a multi-modality image registration network based on multi-stage transformation-enhanced features (MTEF) that maintains the continuity of the deformation field. The model uses the wavelet transform to extract different components of the image, which are fused and enhanced to form the model input, and it performs multiple registrations from local to global regions. We then propose a novel shared pyramid registration network that accurately extracts features from the different modalities and optimizes the predicted deformation field through progressive refinement. To further improve registration performance, we also propose a deep-learning similarity measure combined with bistructural morphology: on top of the deep-learning backbone, bistructural morphology is added to train a pelvic-region registration evaluator, from which the model obtains loss-function parameters that cover large deformations. The model was validated on actual clinical data from cervical cancer patients. In extensive experiments, the proposed model achieved the highest Dice similarity coefficient (DSC) among state-of-the-art registration methods; the DSC of the MTEF algorithm is 5.64% higher than that of TransMorph. The method can effectively integrate multi-modal image information, improve the accuracy of tumor localization, and benefit more cervical cancer patients.
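Two of the building blocks named in the abstract can be sketched compactly: a single-level 2-D Haar wavelet decomposition (the kind of multi-component extraction the wavelet-transform stage relies on) and the Dice similarity coefficient used for evaluation. The sketch below is illustrative only, assuming NumPy and an orthonormal Haar basis; it is not the authors' MTEF implementation, and the function names are hypothetical.

```python
import numpy as np


def haar_dwt2(img: np.ndarray):
    """Single-level 2-D Haar decomposition of an even-sized image.

    Illustrative sketch (not the MTEF pipeline): splits the image into
    the LL (approximation), LH, HL, and HH (detail) subbands that a
    wavelet-based fusion stage would combine and enhance.
    """
    a = img[0::2, 0::2].astype(float)  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-frequency approximation
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh


def dice_similarity(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice similarity coefficient: DSC = 2|A∩B| / (|A| + |B|), in [0, 1]."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

In evaluation, DSC is computed between anatomical structures delineated on the fixed image and the same structures propagated through the predicted deformation field; higher overlap indicates a more accurate registration.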