
PolySegNet: improving polyp segmentation through Swin Transformer and Vision Transformer fusion.

Author Information

Lijin P, Ullah Mohib, Vats Anuja, Cheikh Faouzi Alaya, Santhosh Kumar G, Nair Madhu S

Affiliations

Artificial Intelligence and Computer Vision Lab, Department of Computer Science, Cochin University of Science and Technology, Kochi, Kerala 682022 India.

Norwegian University of Science and Technology, Teknologivegen 22, 2815 Gjøvik, Norway.

Publication Information

Biomed Eng Lett. 2024 Aug 20;14(6):1421-1431. doi: 10.1007/s13534-024-00415-x. eCollection 2024 Nov.

Abstract

Colorectal cancer ranks as the second most prevalent cancer worldwide, with a high mortality rate. Colonoscopy stands as the preferred procedure for diagnosing colorectal cancer. Detecting polyps at an early stage is critical for effective prevention and diagnosis. However, challenges in colonoscopic procedures often lead medical practitioners to seek support from alternative techniques for timely polyp identification. Polyp segmentation emerges as a promising approach to identify polyps in colonoscopy images. In this paper, we propose an advanced method, PolySegNet, that leverages both Vision Transformer and Swin Transformer, coupled with a Convolutional Neural Network (CNN) decoder. The fusion of these models facilitates a comprehensive analysis of various modules in our proposed architecture. To assess the performance of PolySegNet, we evaluate it on three colonoscopy datasets, a combined dataset, and their augmented versions. The experimental results demonstrate that PolySegNet achieves competitive results in terms of polyp segmentation accuracy and efficacy, achieving a mean Dice score of 0.92 and a mean Intersection over Union (IoU) of 0.86. These metrics highlight the superior performance of PolySegNet in accurately delineating polyp boundaries compared to existing methods. PolySegNet has shown great promise in accurately and efficiently segmenting polyps in medical images. The proposed method could be the foundation for a new class of transformer-based segmentation models in medical image analysis.
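The abstract describes a dual-encoder design: a Vision Transformer branch and a Swin Transformer branch whose features are fused and decoded by a CNN. The sketch below illustrates only that general fusion pattern under stated assumptions; the placeholder encoders, channel widths, and fusion-by-concatenation are hypothetical and do not reproduce PolySegNet's actual layers.

```python
# Minimal, hypothetical sketch of a dual-encoder fusion segmenter:
# two transformer-style backbones (replaced here by simple placeholder
# modules) feed a small CNN decoder. Illustration only, not the paper's code.
import torch
import torch.nn as nn


class PlaceholderEncoder(nn.Module):
    """Stand-in for a ViT/Swin backbone: 16x downsampling feature map."""

    def __init__(self, out_channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, out_channels, kernel_size=16, stride=16),  # patch-embed-like
            nn.GELU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)  # (B, C, H/16, W/16)


class FusionSegmenter(nn.Module):
    def __init__(self, c_vit: int = 256, c_swin: int = 256):
        super().__init__()
        self.vit_branch = PlaceholderEncoder(c_vit)
        self.swin_branch = PlaceholderEncoder(c_swin)
        # CNN decoder: fuse by channel concatenation, then upsample to input size.
        self.decoder = nn.Sequential(
            nn.Conv2d(c_vit + c_swin, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=16, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 1, kernel_size=1),  # single-channel polyp logit map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.vit_branch(x), self.swin_branch(x)], dim=1)
        return self.decoder(fused)


if __name__ == "__main__":
    model = FusionSegmenter()
    logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 1, 224, 224])
```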

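The reported evaluation uses mean Dice and mean IoU. For readers unfamiliar with these overlap metrics, here is a small sketch of how they are conventionally computed from binary masks; the function names and the toy example are illustrative assumptions, not code or data from the paper.

```python
# Conventional Dice and IoU for binary segmentation masks (illustrative only).
import numpy as np


def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |P ∩ T| / |P ∪ T|."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)


if __name__ == "__main__":
    # Toy 4x4 masks: Dice ≈ 0.667, IoU = 0.5.
    pred = np.array([[0, 1, 1, 0]] * 4)
    target = np.array([[0, 1, 0, 0]] * 4)
    print(f"Dice: {dice_score(pred, target):.3f}, IoU: {iou_score(pred, target):.3f}")
```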

Similar Articles

1. PolySegNet: improving polyp segmentation through swin transformer and vision transformer fusion.
   Biomed Eng Lett. 2024 Aug 20;14(6):1421-1431. doi: 10.1007/s13534-024-00415-x. eCollection 2024 Nov.
3. VMDU-net: a dual encoder multi-scale fusion network for polyp segmentation with Vision Mamba and Cross-Shape Transformer integration.
   Front Artif Intell. 2025 Jun 18;8:1557508. doi: 10.3389/frai.2025.1557508. eCollection 2025.
4. Enhancing colorectal polyp segmentation with TCFMA-Net: A transformer-based cross feature and multi-attention network.
   Artif Intell Med. 2025 Sep;167:103167. doi: 10.1016/j.artmed.2025.103167. Epub 2025 May 22.
6. Leveraging a foundation model zoo for cell similarity search in oncological microscopy across devices.
   Front Oncol. 2025 Jun 18;15:1480384. doi: 10.3389/fonc.2025.1480384. eCollection 2025.
8. Transformers for Neuroimage Segmentation: Scoping Review.
   J Med Internet Res. 2025 Jan 29;27:e57723. doi: 10.2196/57723.
9. Cognitive decline assessment using semantic linguistic content and transformer deep learning architecture.
   Int J Lang Commun Disord. 2024 May-Jun;59(3):1110-1127. doi: 10.1111/1460-6984.12973. Epub 2023 Nov 16.
10. EPSegNet: Lightweight Semantic Recalibration and Assembly for Efficient Polyp Segmentation.
    IEEE Trans Neural Netw Learn Syst. 2025 Aug;36(8):13805-13817. doi: 10.1109/TNNLS.2025.3527557.

