
Geometry-Consistent Adversarial Registration Model for Unsupervised Multi-Modal Medical Image Registration.

Publication

IEEE J Biomed Health Inform. 2023 Jul;27(7):3455-3466. doi: 10.1109/JBHI.2023.3270199. Epub 2023 Jun 30.

Abstract

Deformable multi-modal medical image registration aligns the anatomical structures of different modalities to the same coordinate system through a spatial transformation. Because ground-truth registration labels are difficult to collect, existing methods often adopt the unsupervised multi-modal image registration setting. However, it is hard to design satisfactory metrics to measure the similarity of multi-modal images, which heavily limits multi-modal registration performance. Moreover, due to the contrast differences of the same organ across modalities, it is difficult to extract and fuse the representations of different modal images. To address the above issues, we propose a novel unsupervised multi-modal adversarial registration framework that takes advantage of image-to-image translation to translate the medical image from one modality to another. In this way, we are able to use well-defined uni-modal metrics to better train the models. Inside our framework, we propose two improvements to promote accurate registration. First, to prevent the translation network from learning spatial deformation, we propose a geometry-consistent training scheme that encourages the translation network to learn the modality mapping alone. Second, we propose a novel semi-shared multi-scale registration network that extracts features of multi-modal images effectively and predicts multi-scale registration fields in a coarse-to-fine manner to accurately register large deformation areas. Extensive experiments on brain and pelvic datasets demonstrate the superiority of the proposed method over existing methods, revealing that our framework has great potential in clinical application.
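To make the two improvements concrete, below is a minimal PyTorch-style sketch of (i) a geometry-consistency penalty that discourages the translation network from encoding spatial deformation, and (ii) additive coarse-to-fine accumulation of multi-scale displacement fields. The network handle `G`, the choice of a horizontal flip as the geometric transform, the L1 penalty, and the additive refinement are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (PyTorch). Assumptions, not the paper's exact method:
# `G` is any image-to-image translation network; the geometric transform
# is a horizontal flip; fields hold pixel-unit displacements.
import torch
import torch.nn.functional as F

def geometry_consistency_loss(G, x):
    """Encourage G to learn only the modality mapping.

    If G performs no spatial deformation, it commutes with a fixed
    geometric transform f: G(f(x)) == f(G(x)). Here f flips the
    image along its last (width) axis.
    """
    f = lambda img: torch.flip(img, dims=[-1])
    return F.l1_loss(G(f(x)), f(G(x)))

def accumulate_fields(fields):
    """Combine coarse-to-fine displacement fields (coarsest first).

    Each element is a (B, 2, H_i, W_i) tensor of pixel displacements.
    The coarse field is upsampled (with its displacements rescaled)
    to the next resolution, where the finer field refines it
    additively -- a common simplification of true field composition.
    """
    total = fields[0]
    for finer in fields[1:]:
        scale = finer.shape[-1] / total.shape[-1]
        total = F.interpolate(total, size=finer.shape[-2:],
                              mode="bilinear", align_corners=False) * scale
        total = total + finer
    return total
```

In a full training loop, a penalty of this form would be weighted and added to the adversarial and registration objectives, so that the translation network is pushed toward a pure modality mapping while the registration network absorbs the spatial deformation.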

