

Cross-Modality Medical Image Segmentation via Enhanced Feature Alignment and Cross Pseudo Supervision Learning.

Authors

Yang Mingjing, Wu Zhicheng, Zheng Hanyu, Huang Liqin, Ding Wangbin, Pan Lin, Yin Lei

Affiliations

College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China.

School of Medical Imaging, Fujian Medical University, Fuzhou 350122, China.

Publication

Diagnostics (Basel). 2024 Aug 12;14(16):1751. doi: 10.3390/diagnostics14161751.

Abstract

Given the diversity of medical images, traditional image segmentation models face the issue of domain shift. Unsupervised domain adaptation (UDA) methods have emerged as a pivotal strategy for cross-modality analysis. These methods typically use generative adversarial networks (GANs) for both image-level and feature-level domain adaptation through image transformation and reconstruction, assuming that features between domains are well aligned. However, this assumption falters when there are significant gaps between medical image modalities, such as MRI and CT. These gaps hinder the effective training of segmentation networks on cross-modality images and can lead to misleading training guidance and instability. To address these challenges, this paper introduces a novel approach comprising a cross-modality feature alignment sub-network and a cross-pseudo-supervised dual-stream segmentation sub-network. Together, these components bridge domain discrepancies more effectively and ensure a stable training environment. The feature alignment sub-network performs bidirectional alignment of features between the source and target domains, incorporating a self-attention module to aid in learning structurally consistent and relevant information. The segmentation sub-network leverages an enhanced cross-pseudo-supervised loss to harmonize the outputs of the two segmentation networks, assessing pseudo-distances between domains to improve pseudo-label quality and thus the overall learning efficiency of the framework. The method's effectiveness is demonstrated by notable gains in segmentation precision on target domains for abdominal and brain tasks.
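To make the dual-stream training signal concrete, the sketch below shows the standard cross-pseudo-supervision term on which such a loss is built: each of the two segmentation networks is trained against hard pseudo-labels taken from the other's predictions. This is a minimal NumPy illustration with hypothetical function names, not the paper's implementation; in particular, it omits the paper's enhancement that weights the loss by a pseudo-distance between domains, and it treats each row of the logits array as one pixel's class scores.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cps_loss(logits_a, logits_b):
    """Cross pseudo supervision between two networks.

    Network A is supervised by B's hard pseudo-labels (argmax of B's
    logits) via cross-entropy, and vice versa; the two terms are summed.
    `logits_*` has shape (num_pixels, num_classes).
    """
    pseudo_a = logits_a.argmax(axis=-1)  # A's pseudo-labels, used to train B
    pseudo_b = logits_b.argmax(axis=-1)  # B's pseudo-labels, used to train A
    prob_a = softmax(logits_a)
    prob_b = softmax(logits_b)
    idx = np.arange(logits_a.shape[0])
    # Cross-entropy of each network's prediction against the other's labels.
    loss_a = -np.log(prob_a[idx, pseudo_b] + 1e-12).mean()
    loss_b = -np.log(prob_b[idx, pseudo_a] + 1e-12).mean()
    return loss_a + loss_b
```

When the two networks agree and are confident, both cross-entropy terms are small; when they disagree, each term penalizes the corresponding network, pushing the outputs toward consistency, which is the harmonizing effect the abstract describes.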


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8d8c/11353479/44d8af6d309c/diagnostics-14-01751-g001.jpg
