Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:2610-2613. doi: 10.1109/EMBC46164.2021.9630617.
Multi-modality magnetic resonance image (MRI) registration is an essential step in various MRI analysis tasks. However, it is challenging to acquire all required modalities in clinical practice, which limits the applicability of multi-modality registration. This paper tackles this problem by proposing a novel unsupervised deep-learning-based multi-modality large deformation diffeomorphic metric mapping (LDDMM) framework that can perform multi-modality registration using only single-modality MRIs. Specifically, an unsupervised image-to-image translation model is trained and used to synthesize the missing-modality MRIs from the available ones. Multi-modality LDDMM is then performed in a multi-channel manner. Experimental results obtained on a publicly accessible dataset confirm the superior performance of the proposed approach. Clinical relevance: This work provides a tool for multi-modality MRI registration using solely single-modality images, addressing the very common issue of missing modalities in clinical practice.
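The abstract describes a two-stage pipeline: first synthesize the missing modality with an unsupervised image-to-image translation model, then register with a multi-channel similarity that pools the real and synthesized modalities. The sketch below is only a minimal illustration of that idea, assuming PyTorch; the toy Translator network, the dense displacement-field parameterisation, and the SSD-plus-smoothness loss are illustrative stand-ins (a plain displacement field is far simpler than the paper's LDDMM formulation), not the authors' implementation.

```python
# Minimal sketch of the two-stage idea: modality synthesis followed by
# multi-channel registration. All architectures and losses here are
# simplified assumptions, not the paper's LDDMM framework.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Translator(nn.Module):
    """Toy image-to-image translation network: synthesizes the missing
    modality (e.g. a pseudo-T2) from the available one (e.g. T1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def warp(image, displacement):
    """Warp a (B,1,H,W) image with a dense displacement field (B,2,H,W)
    given in normalised [-1, 1] coordinates, using bilinear sampling."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = identity + displacement.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)


def multi_channel_loss(moving_chs, fixed_chs, displacement, lam=0.1):
    """Sum-of-squared-differences pooled over all modality channels plus a
    simple smoothness penalty (a crude surrogate for the LDDMM regulariser)."""
    sim = sum(((warp(m, displacement) - f) ** 2).mean()
              for m, f in zip(moving_chs, fixed_chs))
    dx = displacement[:, :, :, 1:] - displacement[:, :, :, :-1]
    dy = displacement[:, :, 1:, :] - displacement[:, :, :-1, :]
    return sim + lam * (dx.pow(2).mean() + dy.pow(2).mean())


if __name__ == "__main__":
    torch.manual_seed(0)
    t1_moving = torch.rand(1, 1, 64, 64)   # available modality (moving image)
    t1_fixed = torch.rand(1, 1, 64, 64)    # available modality (fixed image)

    # Stage 1: synthesize the missing modality for both images.
    translator = Translator()
    with torch.no_grad():
        t2_moving, t2_fixed = translator(t1_moving), translator(t1_fixed)

    # Stage 2: optimise a dense displacement field with a multi-channel loss
    # that uses the real and synthesized modalities jointly.
    disp = torch.zeros(1, 2, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([disp], lr=1e-2)
    for step in range(50):
        opt.zero_grad()
        loss = multi_channel_loss([t1_moving, t2_moving],
                                  [t1_fixed, t2_fixed], disp)
        loss.backward()
        opt.step()
    print(f"final multi-channel loss: {loss.item():.4f}")
```

In the paper the registration stage is a multi-channel LDDMM solved with a learned, unsupervised model rather than per-pair optimisation of a raw displacement field; the sketch only makes the channel-pooling structure of the similarity term concrete.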