Enhancing colorectal polyp segmentation with TCFMA-Net: A transformer-based cross feature and multi-attention network.

Author Information

Manan Malik Abdul, Feng Jinchao, Ahmed Shahzad, Raheem Abdul

Affiliations

Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China.

Publication Information

Artif Intell Med. 2025 Sep;167:103167. doi: 10.1016/j.artmed.2025.103167. Epub 2025 May 22.


DOI: 10.1016/j.artmed.2025.103167
PMID: 40450966
Abstract

This study aims to enhance polyp segmentation in colonoscopy images for the early detection and diagnosis of colorectal cancer. It proposes the Transformer-based cross-feature multi-attention network (TCFMA-Net) for polyp segmentation, addressing challenges such as varying polyp sizes and accurate boundary delineation. TCFMA-Net uses Swin Transformer-based encoders, a cross-feature enhancer network built from multiple cross-feature enhancer blocks, and multi-attention modules integrated within and outside the decoder blocks. This enables comprehensive cross-feature fusion, preserves image clarity, and facilitates information flow, allowing efficient processing of both low-level and high-level features. TCFMA-Net effectively captures the complexities of polyp size variation and boundary ambiguity and consistently outperforms existing methods on six benchmark datasets, achieving Dice scores (with confidence intervals, CI) of 92.74 ± 0.10 (CI: 91.92, 94.04), 91.46 ± 0.14 (CI: 91.12, 92.72), and 87.34 ± 0.13 (CI: 86.19, 88.10) on the CVC-ClinicDB, Kvasir-SEG, and BKAI-IGH datasets, respectively, demonstrating its robustness across diverse polyp segmentation tasks. Generalizability tests yielded Dice scores of 89.51 ± 0.10 (CI: 88.67, 89.71), 72.91 ± 0.09 (CI: 71.39, 74.14), and 65.83 ± 0.22 (CI: 65.47, 66.52) on the CVC-300, CVC-ColonDB, and PolypGen datasets, respectively. TCFMA-Net segments polyps accurately across datasets, handles variations in polyp characteristics, and generalizes robustly. This study presents a significant advancement in polyp segmentation methods, offering an accurate and reliable tool for colorectal cancer diagnosis.
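
For reference, the Dice scores above are values of the standard Dice similarity coefficient, reported here on a 0-100 (percentage) scale; for a predicted mask P and a ground-truth mask G,

$$\mathrm{Dice}(P, G) = \frac{2\,|P \cap G|}{|P| + |G|}$$

The abstract describes the architecture only at a high level. The sketch below is a minimal, hypothetical PyTorch reconstruction of that pattern: a hierarchical encoder, cross-feature enhancer blocks that fuse adjacent scales, and channel/spatial attention applied along the decoding path. It is not the authors' implementation; the names ConvStage, CrossFeatureEnhancer, MultiAttention, and TCFMASketch are invented for illustration, and a plain convolutional stub stands in for the Swin Transformer encoder.

```python
# Illustrative sketch only (assumptions noted above), not the TCFMA-Net release.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvStage(nn.Module):
    """Stand-in for one Swin encoder stage: halve resolution, widen channels."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class CrossFeatureEnhancer(nn.Module):
    """Fuse a fine-scale skip feature with an upsampled coarse-scale feature."""
    def __init__(self, c_fine, c_coarse, c_out):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(c_fine + c_coarse, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, fine, coarse):
        coarse = F.interpolate(coarse, size=fine.shape[-2:],
                               mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([fine, coarse], dim=1))


class MultiAttention(nn.Module):
    """Channel attention (squeeze-and-excite style) followed by spatial attention."""
    def __init__(self, c):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, max(c // 4, 1), 1), nn.ReLU(inplace=True),
            nn.Conv2d(max(c // 4, 1), c, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(c, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)       # reweight channels
        return x * self.spatial(x)    # reweight spatial locations


class TCFMASketch(nn.Module):
    def __init__(self, widths=(32, 64, 128, 256)):
        super().__init__()
        chans = [3] + list(widths)
        self.stages = nn.ModuleList(ConvStage(a, b) for a, b in zip(chans, chans[1:]))
        self.enhancers = nn.ModuleList(
            CrossFeatureEnhancer(widths[i], widths[i + 1], widths[i])
            for i in range(len(widths) - 1))
        self.attn = nn.ModuleList(MultiAttention(c) for c in widths[:-1])
        self.head = nn.Conv2d(widths[0], 1, 1)

    def forward(self, x):
        size = x.shape[-2:]
        feats = []
        for stage in self.stages:          # hierarchical multi-scale features
            x = stage(x)
            feats.append(x)
        out = feats[-1]                    # coarsest map starts the decoding path
        for i in reversed(range(len(self.enhancers))):
            out = self.attn[i](self.enhancers[i](feats[i], out))
        return F.interpolate(self.head(out), size=size,
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    logits = TCFMASketch()(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 1, 224, 224])
```

Each enhancer upsamples the coarser map, concatenates it with the finer skip feature, and fuses them with a convolution; channel and spatial attention then reweight the fused map before the next decoding step. This mirrors, in simplified form, the coarse-to-fine cross-feature fusion and multi-attention decoding the abstract attributes to TCFMA-Net.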


Similar Articles

[1]
Enhancing colorectal polyp segmentation with TCFMA-Net: A transformer-based cross feature and multi-attention network.

Artif Intell Med. 2025-9

[2]
VMDU-net: a dual encoder multi-scale fusion network for polyp segmentation with Vision Mamba and Cross-Shape Transformer integration.

Front Artif Intell. 2025-6-18

[3]
UViT-Seg: An Efficient ViT and U-Net-Based Framework for Accurate Colorectal Polyp Segmentation in Colonoscopy and WCE Images.

J Imaging Inform Med. 2024-10

[4]
EPSegNet: Lightweight Semantic Recalibration and Assembly for Efficient Polyp Segmentation.

IEEE Trans Neural Netw Learn Syst. 2025-8

[5]
DCATNet: polyp segmentation with deformable convolution and contextual-aware attention network.

BMC Med Imaging. 2025-4-14

[6]
Multi-scale nested UNet with transformer for colorectal polyp segmentation.

J Appl Clin Med Phys. 2024-6

[7]
Multi-level channel-spatial attention and light-weight scale-fusion network (MCSLF-Net): multi-level channel-spatial attention and light-weight scale-fusion transformer for 3D brain tumor segmentation.

Quant Imaging Med Surg. 2025-7-1

[8]
Colorectal cancer detection with enhanced precision using a hybrid supervised and unsupervised learning approach.

Sci Rep. 2025-1-25

[9]
Three-stage polyp segmentation network based on reverse attention feature purification with Pyramid Vision Transformer.

Comput Biol Med. 2024-9

[10]
PolySegNet: improving polyp segmentation through swin transformer and vision transformer fusion.

Biomed Eng Lett. 2024-8-20
