
StAC-DA: Structure aware cross-modality domain adaptation framework with image and feature-level adaptation for medical image segmentation.

Author information

Baldeon-Calisto Maria, Lai-Yuen Susana K, Puente-Mejia Bernardo

Affiliations

Departamento de Ingeniería Industrial, Colegio de Ciencias e Ingeniería, Instituto de Innovación en Productividad y Logística CATENA-USFQ, Universidad San Francisco de Quito, Quito, Ecuador.

Department of Industrial and Management Systems, University of South Florida, Tampa, FL, USA.

Publication information

Digit Health. 2024 Sep 2;10:20552076241277440. doi: 10.1177/20552076241277440. eCollection 2024 Jan-Dec.

Abstract

OBJECTIVE

Convolutional neural networks (CNNs) have achieved state-of-the-art results in various medical image segmentation tasks. However, CNNs often assume that the source and target datasets follow the same probability distribution, and when this assumption is not satisfied, their performance degrades significantly. This poses a limitation in medical image analysis, where including information from different imaging modalities can bring large clinical benefits. In this work, we present an unsupervised Structure Aware Cross-modality Domain Adaptation (StAC-DA) framework for medical image segmentation.

METHODS

StAC-DA implements an image- and feature-level adaptation in a sequential two-step approach. The first step performs an image-level alignment, where images from the source domain are translated to the target domain in pixel space by implementing a CycleGAN-based model. The latter model includes a structure-aware network that preserves the shape of the anatomical structure during translation. The second step consists of a feature-level alignment. A U-Net network with deep supervision is trained with the transformed source domain images and target domain images in an adversarial manner to produce probable segmentations for the target domain.
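The two-step training described in METHODS can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the tiny conv_net/patch_disc modules, the loss weights (10.0, 1.0, 0.01), the class count N_CLASSES = 5, and the LSGAN-style losses are all assumptions standing in for the paper's full CycleGAN generators, structure-aware network, and deeply supervised U-Net.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES = 5  # assumed: 4 cardiac substructures + background

def conv_net(in_ch, out_ch):
    """Tiny stand-in for the full generator / U-Net architectures in the paper."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(16, out_ch, 3, padding=1),
    )

def patch_disc(in_ch):
    """PatchGAN-style discriminator returning a map of real/fake scores."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(16, 1, 4, stride=2, padding=1),
    )

# ---- Step 1: image-level alignment (CycleGAN-style translation with a structure-aware term) ----
G_st, G_ts = conv_net(1, 1), conv_net(1, 1)   # source->target and target->source generators
D_t = patch_disc(1)                           # discriminator on target-appearance images
S_aux = conv_net(1, N_CLASSES)                # structure-aware network segmenting translated images

mse, l1, ce = nn.MSELoss(), nn.L1Loss(), nn.CrossEntropyLoss()
opt_g = torch.optim.Adam(
    list(G_st.parameters()) + list(G_ts.parameters()) + list(S_aux.parameters()), lr=2e-4)

def step1_generator_update(x_s, y_s, x_t):
    """One generator update: adversarial + cycle-consistency + structure-preservation losses.
    Discriminator updates (for D_t and a symmetric source-side D_s) are omitted for brevity."""
    opt_g.zero_grad()
    fake_t = G_st(x_s)                        # source image rendered with target-domain appearance
    cycle = l1(G_ts(fake_t), x_s) + l1(G_st(G_ts(x_t)), x_t)
    d_pred = D_t(fake_t)
    adv = mse(d_pred, torch.ones_like(d_pred))          # LSGAN-style generator loss
    structure = ce(S_aux(fake_t), y_s)        # translated image must still match the source mask
    (adv + 10.0 * cycle + 1.0 * structure).backward()   # loss weights are placeholders
    opt_g.step()
    return fake_t.detach()

# ---- Step 2: feature-level alignment (adversarial training of the segmentation network) ----
seg = conv_net(1, N_CLASSES)                  # stand-in for the deeply supervised U-Net
D_out = patch_disc(N_CLASSES)                 # discriminates segmenter outputs: translated-source vs. target
opt_seg = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D_out.parameters(), lr=1e-4)

def step2_update(fake_t, y_s, x_t):
    """Supervised loss on translated source images + adversarial loss that pushes
    target-domain predictions to be indistinguishable from translated-source ones."""
    # Segmenter update
    opt_seg.zero_grad()
    pred_s, pred_t = seg(fake_t), seg(x_t)
    d_t = D_out(F.softmax(pred_t, dim=1))
    sup = ce(pred_s, y_s)
    fool = mse(d_t, torch.ones_like(d_t))
    (sup + 0.01 * fool).backward()
    opt_seg.step()
    # Discriminator update: translated-source predictions -> real, target predictions -> fake
    opt_d.zero_grad()
    d_s = D_out(F.softmax(pred_s.detach(), dim=1))
    d_t = D_out(F.softmax(pred_t.detach(), dim=1))
    (mse(d_s, torch.ones_like(d_s)) + mse(d_t, torch.zeros_like(d_t))).backward()
    opt_d.step()

# Example usage with dummy 2D slices (batch of 2, 64x64):
x_s = torch.randn(2, 1, 64, 64)                  # source-domain images (e.g. MRI)
y_s = torch.randint(0, N_CLASSES, (2, 64, 64))   # source segmentation masks
x_t = torch.randn(2, 1, 64, 64)                  # unlabeled target-domain images (e.g. CT)
fake_t = step1_generator_update(x_s, y_s, x_t)
step2_update(fake_t, y_s, x_t)
```

In this sketch, the step 2 discriminator operates on softmax probability maps, so the segmenter is trained both with supervision on translated source images and adversarially so that its target-domain predictions become indistinguishable from its predictions on translated source images.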

RESULTS

The framework is evaluated on bidirectional cardiac substructure segmentation. StAC-DA outperforms leading unsupervised domain adaptation approaches, ranking first in the segmentation of the ascending aorta both when adapting from the Magnetic Resonance Imaging (MRI) domain to the Computed Tomography (CT) domain and when adapting from the CT to the MRI domain.

CONCLUSIONS

The presented framework overcomes the limitations posed by differing distributions in training and testing datasets. Moreover, the experimental results highlight its potential to improve the accuracy of medical image segmentation across diverse imaging modalities.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6c88/11369866/b785620342d1/10.1177_20552076241277440-fig1.jpg
