Department of Medical Physics and Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran.
Research Center for Neuromodulation and Pain, Shiraz University of Medical Sciences, Shiraz, Iran.
J Appl Clin Med Phys. 2023 Nov;24(11):e14177. doi: 10.1002/acm2.14177. Epub 2023 Oct 12.
Multimodal image registration is key to many clinical image-guided interventions. However, it is a challenging task because of the complicated and unknown relationships between different modalities. Currently, deep supervised learning is the state-of-the-art approach, in which registration is performed end-to-end in a single shot. Consequently, a large amount of ground-truth data is required to improve the results of deep neural networks for registration. Moreover, supervised methods may yield models that are biased towards the annotated structures. An alternative approach that addresses these challenges is unsupervised learning. In this study, we designed a novel deep unsupervised Convolutional Neural Network (CNN)-based model for affine co-registration of computed tomography/magnetic resonance (CT/MR) brain images. For this purpose, we created a dataset of 1100 CT/MR slice pairs from the brains of 110 neuropsychiatric patients with and without tumors. Next, 12 landmarks were selected by an experienced radiologist and annotated on each slice, enabling the computation of a series of evaluation metrics: target registration error (TRE), Dice similarity, Hausdorff distance, and Jaccard coefficient. The proposed method registered the multimodal images with a TRE of 9.89, Dice similarity of 0.79, Hausdorff distance of 7.15, and Jaccard coefficient of 0.75, values that are acceptable for clinical applications. Moreover, the approach registered the images in 203 ms; the short registration time together with the high accuracy makes it suitable for clinical use. The results show that our proposed method achieves competitive performance against related approaches in terms of both computation time and the evaluation metrics.
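To make the idea of an unsupervised, CNN-based affine registration model concrete, the following is a minimal PyTorch sketch of one common realization: a small encoder regresses a 2×3 affine matrix from the stacked CT/MR slice pair, a differentiable spatial transformer warps the moving image, and training relies only on an image-similarity loss rather than ground-truth transforms. The architecture, loss function (MSE here for brevity; a multimodal study would more likely use mutual information or cross-correlation), and hyperparameters are illustrative assumptions, not the authors' exact model.

```python
# Illustrative sketch only: unsupervised affine registration of 2D CT/MR slices.
# Layer sizes, loss, and training details are assumptions, not the authors' exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineRegNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional encoder over the stacked (moving, fixed) slice pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regress the 6 parameters of a 2x3 affine matrix, initialized to identity.
        self.fc = nn.Linear(64, 6)
        self.fc.weight.data.zero_()
        self.fc.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, moving, fixed):
        x = torch.cat([moving, fixed], dim=1)           # (N, 2, H, W)
        theta = self.fc(self.encoder(x).flatten(1))     # (N, 6)
        theta = theta.view(-1, 2, 3)                    # affine matrix per pair
        grid = F.affine_grid(theta, moving.size(), align_corners=False)
        warped = F.grid_sample(moving, grid, align_corners=False)
        return warped, theta

# One unsupervised training step: no ground-truth transform, only image similarity.
model = AffineRegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
moving = torch.rand(4, 1, 256, 256)   # e.g., MR slices
fixed = torch.rand(4, 1, 256, 256)    # e.g., CT slices
warped, _ = model(moving, fixed)
loss = F.mse_loss(warped, fixed)      # placeholder similarity loss
loss.backward()
opt.step()
```

Because no annotated transforms are needed for training, metrics such as TRE, Dice, Hausdorff distance, and the Jaccard coefficient can be reserved entirely for evaluation on landmark-annotated test slices, as done in the study.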