Edinburgh Imaging Facility QMRI, Centre for Cardiovascular Science, University of Edinburgh, Edinburgh EH16 4TJ, UK.
Faculty of Medicine, National Heart & Lung Institute, Imperial College London, London SW7 2BX, UK.
Sensors (Basel). 2022 Mar 9;22(6):2125. doi: 10.3390/s22062125.
Magnetic Resonance Imaging (MRI) examinations typically employ multiple sequences (defined here as "modalities"). As each modality is designed to offer different anatomical and functional clinical information, there are evident disparities in imaging content across modalities. Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging, for example before imaging biomarkers can be derived and clinically evaluated across different MRI modalities, time phases and slices. Although commonly needed in real clinical scenarios, affine and non-rigid image registration has not been extensively investigated within a single unsupervised model architecture. In this work, we present an unsupervised deep learning registration methodology that can accurately model affine and non-rigid transformations simultaneously. Moreover, inverse consistency is a fundamental inter-modality registration property that is not considered in deep learning registration algorithms. To address inverse consistency, our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations, and involves two factorised transformation networks (one per encoder-decoder channel) and an inverse-consistency loss to learn topology-preserving anatomical transformations. Overall, our model (named "FIRE") shows improved performance against the reference standard baseline method (i.e., Symmetric Normalization implemented using the ANTs toolbox) in experiments on multi-modality brain 2D and 3D MRI and intra-modality cardiac 4D MRI data. We focus on explaining model-data components to enhance model explainability in medical image registration. In computational time experiments, we show that the FIRE model operates in a memory-saving mode, as it inherently learns topology-preserving image registration directly in the training phase.
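To illustrate the inverse-consistency idea mentioned above, the following is a minimal sketch (not the authors' implementation) of an inverse-consistency penalty on a pair of 1-D displacement fields: composing the forward and backward fields should yield the identity (zero displacement). The function names `compose_fields` and `inverse_consistency_loss` are illustrative only.

```python
import numpy as np

def compose_fields(u, v):
    """Compose two 1-D displacement fields on an integer grid:
    (u o v)(x) = v(x) + u(x + v(x)), with u linearly interpolated."""
    n = u.shape[0]
    grid = np.arange(n, dtype=float)
    u_warped = np.interp(grid + v, grid, u)  # u evaluated at x + v(x)
    return v + u_warped

def inverse_consistency_loss(u_fwd, u_bwd):
    """Mean squared deviation of the composed field from the identity map.
    Penalises forward/backward transformations that are not inverses."""
    comp = compose_fields(u_bwd, u_fwd)
    return float(np.mean(comp ** 2))

# A uniform shift and its exact inverse incur ~zero loss.
n = 32
u_fwd = np.full(n, 2.0)   # forward field: shift by +2
u_bwd = np.full(n, -2.0)  # backward field: shift by -2
print(inverse_consistency_loss(u_fwd, u_bwd))  # ~0.0
```

In a deep learning registration model this term is typically added to the image similarity loss, encouraging the two factorised transformation networks to produce mutually inverse, topology-preserving mappings.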
We therefore demonstrate an efficient and versatile registration technique that can have merit in multi-modal image registration in the clinical setting.