
GCSA-SegFormer: Transformer-Based Segmentation for Liver Tumor Pathological Images

Author Information

Wen Jingbin, Yang Sihua, Li Weiqi, Cheng Shuqun

Affiliations

School of Biomedical Engineering, Southern Medical University, No. 1023-1063, Shatai South Road, Baiyun District, Guangzhou 510440, China.

School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200082, China.

Publication Information

Bioengineering (Basel). 2025 Jun 4;12(6):611. doi: 10.3390/bioengineering12060611.

Abstract

Pathological images are crucial for tumor diagnosis; however, because of their extremely high resolution, pathologists often spend considerable time and effort analyzing them. Moreover, diagnostic outcomes can be significantly influenced by subjective judgment. With the rapid advancement of artificial intelligence technologies, deep learning models offer new possibilities for pathological image diagnostics, enabling pathologists to diagnose more quickly, accurately, and reliably, thereby improving work efficiency. This paper proposes a novel Global Channel Spatial Attention (GCSA) module aimed at enhancing the representational capability of input feature maps. The module combines channel attention, channel shuffling, and spatial attention to capture global dependencies within feature maps. By integrating the GCSA module into the SegFormer architecture, the resulting network, named GCSA-SegFormer, can more accurately capture global information and detailed features in complex scenarios. The proposed network was evaluated on a liver dataset and the publicly available ICIAR 2018 BACH dataset. On the liver dataset, GCSA-SegFormer achieved a 1.12% increase in mean intersection over union (MIoU) and a 1.15% increase in mean pixel accuracy (MPA) compared to the baseline models. On the BACH dataset, it improved MIoU by 1.26% and MPA by 0.39% compared to the baseline models. Additionally, the network's performance metrics were compared with those of seven different semantic segmentation methods, showing good results in all comparisons.
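The abstract specifies the ordering of the GCSA components (channel attention, then channel shuffling, then spatial attention) but not their internal layer design. Below is a minimal PyTorch sketch of a GCSA-style block under those assumptions; the reduction ratio, group count, 7x7 spatial convolution, and placement inside SegFormer are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of a GCSA-style block: channel attention -> channel shuffle
# -> spatial attention on a (B, C, H, W) feature map. Layer sizes are assumptions.
import torch
import torch.nn as nn


class GCSABlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, groups: int = 4):
        super().__init__()
        # Channel attention: squeeze spatial dims, produce per-channel weights.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.groups = groups
        # Spatial attention: 7x7 conv over pooled per-pixel channel statistics.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def channel_shuffle(self, x: torch.Tensor) -> torch.Tensor:
        # Interleave channel groups so information mixes across groups.
        b, c, h, w = x.shape
        x = x.view(b, self.groups, c // self.groups, h, w)
        x = x.transpose(1, 2).contiguous()
        return x.view(b, c, h, w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)              # channel attention
        x = self.channel_shuffle(x)              # channel shuffling
        avg_map = x.mean(dim=1, keepdim=True)    # per-pixel channel mean
        max_map = x.amax(dim=1, keepdim=True)    # per-pixel channel max
        x = x * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x


if __name__ == "__main__":
    feats = torch.randn(2, 64, 56, 56)           # e.g. one encoder-stage output
    print(GCSABlock(64)(feats).shape)            # torch.Size([2, 64, 56, 56])
```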

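The MIoU and MPA figures reported above follow the standard definitions of mean intersection over union and mean pixel accuracy. The sketch below shows one common way to compute both from a pixel-level confusion matrix; the authors' exact evaluation code is not given in the abstract.

```python
# Standard MIoU / MPA computation from a pixel-level confusion matrix (sketch).
import numpy as np


def confusion_matrix(pred: np.ndarray, target: np.ndarray, num_classes: int) -> np.ndarray:
    # Rows = ground-truth class, columns = predicted class.
    mask = (target >= 0) & (target < num_classes)
    idx = num_classes * target[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)


def miou_mpa(cm: np.ndarray):
    tp = np.diag(cm).astype(float)
    per_class_iou = tp / (cm.sum(axis=1) + cm.sum(axis=0) - tp + 1e-10)
    per_class_acc = tp / (cm.sum(axis=1) + 1e-10)   # per-class pixel accuracy
    return per_class_iou.mean(), per_class_acc.mean()


if __name__ == "__main__":
    pred = np.random.randint(0, 3, size=(512, 512))
    target = np.random.randint(0, 3, size=(512, 512))
    cm = confusion_matrix(pred.ravel(), target.ravel(), num_classes=3)
    print("MIoU %.4f, MPA %.4f" % miou_mpa(cm))
```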

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4958/12189456/be6b1622e569/bioengineering-12-00611-g001.jpg
