Department of Medical Physics, Al-Neelain University, Khartoum, 11121, Sudan.
Department of Physics, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia.
Radiat Oncol. 2024 May 21;19(1):61. doi: 10.1186/s13014-024-02452-3.
Accurate deformable registration of magnetic resonance imaging (MRI) scans containing pathologies is challenging due to changes in tissue appearance. In this paper, we developed a novel automated three-dimensional (3D) convolutional U-Net based deformable image registration (ConvUNet-DIR) method using unsupervised learning to establish correspondence between baseline pre-operative and follow-up MRI scans of patients with brain glioma.
This study used multi-parametric brain MRI scans (T1, T1 contrast-enhanced, T2, FLAIR) acquired at pre-operative and follow-up time points for 160 patients diagnosed with glioma, comprising the BraTS-Reg 2022 challenge dataset. ConvUNet-DIR, a deep learning-based deformable registration workflow with a 3D U-Net style architecture at its core, was developed to establish correspondence between the MRI scans. The workflow consists of three components: (1) the U-Net learns features from pairs of MRI scans and estimates a mapping between them, (2) the grid generator computes the sampling grid from the derived transformation parameters, and (3) the spatial transformation layer produces the warped image by applying the sampling operation with interpolation. A similarity measure served as the network's loss function, with a regularization term limiting the deformation. The model was trained via unsupervised learning on pairs of MRI scans from a training set (n = 102) and validated on a validation set (n = 26) to assess its generalizability. Its performance was evaluated on a test set (n = 32) using the Dice score and structural similarity index (SSIM) quantitative metrics, and was also compared with the state-of-the-art VoxelMorph learning-based algorithms (VM1 and VM2) as baselines.
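The three-component workflow corresponds to a standard unsupervised deformable-registration objective: warp the moving scan with the predicted displacement field, score its similarity against the fixed scan, and penalize rough deformations. The following is a minimal NumPy sketch, not the authors' implementation: nearest-neighbour resampling stands in for the interpolating spatial transformation layer, mean squared error for the similarity measure, and a gradient penalty with an illustrative weight `lam` for the regularizer; in the actual method the field `disp` is predicted by the U-Net.

```python
import numpy as np

def warp(moving, disp):
    """Resample a 3D volume at (identity grid + displacement field).

    disp has shape (3, D, H, W). Nearest-neighbour rounding is used here for
    brevity; the paper's spatial transformation layer uses interpolation.
    """
    coords = np.indices(moving.shape, dtype=float) + disp
    hi = np.array(moving.shape).reshape(3, 1, 1, 1) - 1
    idx = np.clip(np.rint(coords).astype(int), 0, hi)  # clamp to volume bounds
    return moving[idx[0], idx[1], idx[2]]

def registration_loss(fixed, moving, disp, lam=0.01):
    """Image similarity (MSE) plus a smoothness regularizer on the field."""
    sim = np.mean((warp(moving, disp) - fixed) ** 2)
    smooth = np.mean([g ** 2 for g in np.gradient(disp, axis=(1, 2, 3))])
    return sim + lam * smooth
```

With a zero displacement field and identical scans the loss is zero; training drives the predicted field toward this optimum without any ground-truth correspondences, which is what makes the approach unsupervised.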
The ConvUNet-DIR model performed accurate 3D deformable registration, achieving a mean Dice score of 0.975 ± 0.003 and an SSIM of 0.908 ± 0.011 on the test set (n = 32). It also outperformed the VoxelMorph algorithms on both Dice (VM1: 0.969 ± 0.006; VM2: 0.957 ± 0.008) and SSIM (VM1: 0.893 ± 0.012; VM2: 0.857 ± 0.017). Registering a pair of MRI scans takes approximately 1 s on a CPU.
The developed deep learning-based model performs end-to-end deformable registration of a pair of 3D MRI scans from glioma patients without human intervention. It provides accurate, efficient, and robust deformable registration without requiring pre-alignment or labels, and it outperformed the state-of-the-art VoxelMorph learning-based deformable registration algorithms as well as other supervised and unsupervised deep learning-based methods reported in the literature.