A Novel 3D Unsupervised Domain Adaptation Framework for Cross-Modality Medical Image Segmentation.

Publication Information

IEEE J Biomed Health Inform. 2022 Oct;26(10):4976-4986. doi: 10.1109/JBHI.2022.3162118. Epub 2022 Oct 4.

Abstract

We consider the problem of volumetric (3D) unsupervised domain adaptation (UDA) in cross-modality medical image segmentation, aiming to perform segmentation on an unannotated target domain (e.g., MRI) with the help of a labeled source domain (e.g., CT). Previous UDA methods in medical image analysis usually suffer from two challenges: 1) they focus on processing and analyzing data at the 2D level only, thus missing semantic information at the depth level; 2) one-to-one mapping is adopted during the style-transfer process, leading to insufficient alignment in the target domain. Different from existing methods, in our work we conduct a first-of-its-kind investigation of multi-style image translation for complete image alignment to alleviate the domain-shift problem, and also introduce 3D segmentation into domain adaptation tasks to maintain semantic consistency at the depth level. In particular, we develop an unsupervised domain adaptation framework incorporating a novel quartet self-attention module that efficiently strengthens relationships between widely separated features in spatial regions at a higher dimension, leading to a substantial improvement in segmentation accuracy in the unlabeled target domain. On two challenging cross-modality tasks, namely brain-structure and multi-organ abdominal segmentation, our model outperforms current state-of-the-art methods by a significant margin, demonstrating its potential as a benchmark resource for the biomedical and health informatics research community.
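The abstract does not specify the internals of the quartet self-attention module, so the following is only a minimal illustrative sketch of the general idea it builds on: a volumetric (3D) self-attention block, in the non-local style, that relates widely separated voxels in a feature map. All names (SelfAttention3D, reduction, gamma) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class SelfAttention3D(nn.Module):
    """Illustrative 3D self-attention block (non-local style).

    NOTE: this is an assumption-based sketch, not the paper's quartet
    self-attention module; it only demonstrates attention over all
    voxel positions of a volumetric feature map.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        inner = max(channels // reduction, 1)
        self.query = nn.Conv3d(channels, inner, kernel_size=1)
        self.key = nn.Conv3d(channels, inner, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, d, h, w = x.shape
        n = d * h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)   # (b, n, c')
        k = self.key(x).view(b, -1, n)                       # (b, c', n)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)        # (b, n, n) voxel-to-voxel weights
        v = self.value(x).view(b, c, n)                      # (b, c, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, d, h, w)
        return self.gamma * out + x                          # residual connection


if __name__ == "__main__":
    # Toy volumetric feature map: batch 1, 16 channels, 8x16x16 voxels.
    feats = torch.randn(1, 16, 8, 16, 16)
    print(SelfAttention3D(16)(feats).shape)  # torch.Size([1, 16, 8, 16, 16])
```

Because the attention matrix is computed over all d*h*w positions, such a block captures long-range dependencies along the depth axis as well as within slices, which is the motivation the abstract gives for moving from 2D to 3D processing.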
