

A Disentangled Representations based Unsupervised Deformable Framework for Cross-modality Image Registration.

Publication Information

Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:3531-3534. doi: 10.1109/EMBC46164.2021.9630778.

Abstract

Cross-modality magnetic resonance image (MRI) registration is a fundamental step in various MRI analysis tasks. However, it remains challenging due to the domain shift between different modalities. In this paper, we propose a fully unsupervised deformable framework for cross-modality image registration through image disentangling. Specifically, MRIs of both modalities are decomposed into a shared domain-invariant content space and domain-specific style spaces via a multi-modal unsupervised image-to-image translation approach. An unsupervised deformable network is then built on the assumption that intrinsic information in the content space is preserved across modalities. In addition, we propose a novel loss function consisting of two metrics, one defined in the original image space and the other in the content space. Validation experiments were performed on two datasets. Compared to two conventional state-of-the-art cross-modality registration methods, the proposed framework shows superior registration performance.

Clinical relevance: This work can serve as an auxiliary tool for cross-modality registration in clinical practice.
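The two-metric loss lends itself to a short illustration. The PyTorch sketch below shows how an image-space term and a content-space term might be combined with a smoothness penalty on the predicted displacement field. The paper does not publish code, so `warp`, `registration_loss`, `content_encoder`, the MSE similarity, and the lambda weights are all assumptions of ours, not the authors' implementation; in particular, the abstract does not say how the cross-modality image-space comparison is made, so the direct MSE here is a placeholder for whatever metric the paper actually uses.

```python
import torch
import torch.nn.functional as F


def warp(image, flow):
    """Warp a 2-D image batch with a dense displacement field.

    image: (N, C, H, W); flow: (N, 2, H, W), displacements in pixels
    (channel 0 = x, channel 1 = y).
    """
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=image.dtype, device=image.device),
        torch.arange(w, dtype=image.dtype, device=image.device),
        indexing="ij",
    )
    # Absolute sampling positions, normalized to [-1, 1] for grid_sample.
    x = 2.0 * (xs + flow[:, 0]) / (w - 1) - 1.0
    y = 2.0 * (ys + flow[:, 1]) / (h - 1) - 1.0
    grid = torch.stack((x, y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(image, grid, align_corners=True)


def registration_loss(moving, fixed, flow, content_encoder,
                      lambda_content=1.0, lambda_smooth=0.1):
    """Combine an image-space metric, a content-space metric, and a
    smoothness penalty on the displacement field (all weights assumed)."""
    warped = warp(moving, flow)
    # Metric 1: similarity in the original image space (MSE for brevity;
    # a cross-modality method would more likely use NCC, MI, or MIND here).
    image_term = F.mse_loss(warped, fixed)
    # Metric 2: similarity in the shared, domain-invariant content space.
    # content_encoder is a hypothetical stand-in for the shared content
    # encoder of the MUNIT-style translation network described in the paper.
    content_term = F.mse_loss(content_encoder(warped), content_encoder(fixed))
    # Diffusion-style regularizer on spatial gradients of the flow.
    dx = flow[..., :, 1:] - flow[..., :, :-1]
    dy = flow[..., 1:, :] - flow[..., :-1, :]
    smooth_term = dx.pow(2).mean() + dy.pow(2).mean()
    return image_term + lambda_content * content_term + lambda_smooth * smooth_term
```

In a pipeline following the abstract's description, the translation network would be trained first to learn the shared content space, and its content encoder would then be frozen and reused as `content_encoder` while the deformable network is trained against this combined loss.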

