
Efficient Generative-Adversarial U-Net for Multi-Organ Medical Image Segmentation.

Authors

Wang Haoran, Wu Gengshen, Liu Yi

Affiliations

Faculty of Data Science, City University of Macau, Avenida Padre Tomás Pereira Taipa, Macao 999078, China.

School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213000, China.

Publication

J Imaging. 2025 Jan 12;11(1):19. doi: 10.3390/jimaging11010019.

Abstract

Manual labeling of lesions in medical image analysis presents a significant challenge due to its labor-intensive and inefficient nature, which ultimately strains essential medical resources and impedes the advancement of computer-aided diagnosis. This paper introduces a novel medical image-segmentation framework named Efficient Generative-Adversarial U-Net (EGAUNet), designed to facilitate rapid and accurate multi-organ labeling. To enhance the model's capability to comprehend spatial information, we propose the Global Spatial-Channel Attention Mechanism (GSCA). This mechanism enables the model to concentrate more effectively on regions of interest. Additionally, we have integrated Efficient Mapping Convolutional Blocks (EMCB) into the feature-learning process, allowing for the extraction of multi-scale spatial information and the adjustment of feature map channels through optimized weight values. Moreover, the proposed framework progressively enhances its performance by utilizing a generative-adversarial learning strategy, which contributes to improvements in segmentation accuracy. Consequently, EGAUNet demonstrates exemplary segmentation performance on public multi-organ datasets while maintaining high efficiency. For instance, in evaluations on the CHAOS T2SPIR dataset, EGAUNet achieves approximately 2% higher performance on the Jaccard metric, 1% higher on the Dice metric, and nearly 3% higher on the precision metric in comparison to advanced networks such as Swin-Unet and TransUnet.
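The abstract reports gains on the Jaccard, Dice, and precision metrics. These are standard overlap measures for binary segmentation masks; a minimal sketch (toy masks, not from the paper) of how each is computed:

```python
# Segmentation metrics from the abstract, computed on flat binary masks
# (0/1 lists). Toy data below is hypothetical, for illustration only.

def jaccard(pred, target):
    # Intersection over union of the foreground pixels.
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0

def dice(pred, target):
    # Twice the intersection over the total foreground count.
    inter = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0

def precision(pred, target):
    # True positives over all predicted positives.
    tp = sum(p & t for p, t in zip(pred, target))
    fp = sum(p & (1 - t) for p, t in zip(pred, target))
    return tp / (tp + fp) if (tp + fp) else 1.0

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(round(jaccard(pred, target), 3))    # 2 / 4  -> 0.5
print(round(dice(pred, target), 3))       # 4 / 6  -> 0.667
print(round(precision(pred, target), 3))  # 2 / 3  -> 0.667
```

Note that Dice is always at least as large as Jaccard for the same masks, which is consistent with the abstract's smaller reported gain on Dice (1%) than on Jaccard (2%).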


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eedb/11766170/2dfc064f15a9/jimaging-11-00019-g001.jpg
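The abstract does not specify how GSCA is implemented; as a rough illustration of the general idea of combined spatial and channel attention, here is a generic sketch in the style of squeeze-and-excitation/CBAM gating. All function names and the gating choices are assumptions, not the paper's design:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (C, H, W) feature map. Gate each channel by a sigmoid of its
    # global average (squeeze-and-excitation-style channel weighting).
    w = sigmoid(x.mean(axis=(1, 2)))        # (C,)
    return x * w[:, None, None]

def spatial_attention(x):
    # Gate each spatial location by a sigmoid of the cross-channel mean,
    # emphasizing regions with stronger average activation.
    m = sigmoid(x.mean(axis=0))             # (H, W)
    return x * m[None, :, :]

def spatial_channel_attention(x):
    # One plausible composition: channel gating followed by spatial gating.
    return spatial_attention(channel_attention(x))

x = np.random.randn(8, 16, 16)              # hypothetical feature map
y = spatial_channel_attention(x)
print(y.shape)                              # (8, 16, 16): shape is preserved
```

Because both gates only rescale activations, the output keeps the input's shape, so a block like this can be dropped into a U-Net encoder or decoder stage without changing the surrounding layer dimensions.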
