Suppr 超能文献




IRv2-Net: A Deep Learning Framework for Enhanced Polyp Segmentation Performance Integrating InceptionResNetV2 and UNet Architecture with Test Time Augmentation Techniques.

Affiliations

Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh.

Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh.

Publication

Sensors (Basel). 2023 Sep 7;23(18):7724. doi: 10.3390/s23187724.

DOI: 10.3390/s23187724
PMID: 37765780
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10534485/
Abstract

Colorectal polyps in the colon or rectum are precancerous growths that can lead to a more severe disease, colorectal cancer. Accurate segmentation of polyps using medical imaging data is essential for effective diagnosis. However, manual segmentation by endoscopists can be time-consuming, error-prone, and expensive, leading to a high rate of missed anomalies. To solve this problem, an automated diagnostic system based on deep learning algorithms is proposed to detect polyps. The proposed IRv2-Net model is built on the UNet architecture with a pre-trained InceptionResNetV2 encoder to extract rich features from the input samples. The Test Time Augmentation (TTA) technique, which combines predictions on the original, horizontally flipped, and vertically flipped inputs, is used to obtain precise boundary information and multi-scale image features. The performance of numerous state-of-the-art (SOTA) models is compared using several metrics, including accuracy, Dice Similarity Coefficient (DSC), Intersection over Union (IoU), precision, and recall. The proposed model is tested on the Kvasir-SEG and CVC-ClinicDB datasets, demonstrating superior performance in handling unseen real-time data. It achieves the largest area under the Receiver Operating Characteristic (ROC-AUC) and Precision-Recall (AUC-PR) curves. The model exhibits excellent qualitative results across different types of polyps, including larger, smaller, over-saturated, sessile, and flat polyps, both within the same dataset and across different datasets. Our approach can significantly reduce the number of missed polyps. Lastly, a graphical interface is developed for producing the segmentation mask in real time. The findings of this study have potential applications in clinical colonoscopy procedures and can serve as a basis for further research and development.
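The TTA scheme described in the abstract can be sketched generically: the model predicts on the original image and its horizontal and vertical flips, each prediction is un-flipped back to the original orientation, and the three probability maps are averaged. A minimal NumPy sketch (not the authors' implementation), where `model_fn` is a hypothetical stand-in for any segmentation model mapping an image array to a per-pixel probability map of the same shape:

```python
import numpy as np

def tta_predict(model_fn, image):
    """Test Time Augmentation: average predictions over the original
    image and its horizontal/vertical flips, un-flipping each
    prediction before averaging."""
    preds = [
        model_fn(image),                                    # original
        np.flip(model_fn(np.flip(image, axis=1)), axis=1),  # horizontal flip
        np.flip(model_fn(np.flip(image, axis=0)), axis=0),  # vertical flip
    ]
    return np.mean(preds, axis=0)
```

Averaging after un-flipping is what lets the ensemble sharpen boundary estimates: pixels the model segments consistently under all three views keep high probability, while orientation-dependent artifacts are damped.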

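The DSC and IoU metrics used for model comparison have standard set-overlap definitions. A minimal NumPy sketch over binary masks (an illustration, not code from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def iou(pred, target, eps=1e-7):
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return (inter + eps) / (union + eps)
```

The two are monotonically related (IoU = DSC / (2 − DSC)), so they rank models similarly; IoU penalizes partial overlap more heavily, which is why both are commonly reported for segmentation benchmarks such as Kvasir-SEG and CVC-ClinicDB.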

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/974ac7c596bb/sensors-23-07724-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/e67d3d96bf27/sensors-23-07724-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/d04a77edc090/sensors-23-07724-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/9af90eb89efb/sensors-23-07724-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/da9a25246046/sensors-23-07724-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/ae426f3e0d92/sensors-23-07724-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/7a99605529cf/sensors-23-07724-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/2437656196cb/sensors-23-07724-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/b49745f620ca/sensors-23-07724-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/f678543fa891/sensors-23-07724-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/0af3b555ef9e/sensors-23-07724-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/855fac7477ed/sensors-23-07724-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/6b9b4d898425/sensors-23-07724-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/2ba17115784c/sensors-23-07724-g013a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/61f79e1efcdd/sensors-23-07724-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/18b1ac176f23/sensors-23-07724-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/513dfd56d28a/sensors-23-07724-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0b67/10534485/29ce837bdf3b/sensors-23-07724-g017.jpg

Similar Articles

1. IRv2-Net: A Deep Learning Framework for Enhanced Polyp Segmentation Performance Integrating InceptionResNetV2 and UNet Architecture with Test Time Augmentation Techniques.
Sensors (Basel). 2023 Sep 7;23(18):7724. doi: 10.3390/s23187724.
2. Focus U-Net: A novel dual attention-gated CNN for polyp segmentation during colonoscopy.
Comput Biol Med. 2021 Oct;137:104815. doi: 10.1016/j.compbiomed.2021.104815. Epub 2021 Sep 2.
3. Enhanced accuracy with Segmentation of Colorectal Polyp using NanoNetB, and Conditional Random Field Test-Time Augmentation.
Front Robot AI. 2024 Aug 9;11:1387491. doi: 10.3389/frobt.2024.1387491. eCollection 2024.
4. Using DUCK-Net for polyp image segmentation.
Sci Rep. 2023 Jun 16;13(1):9803. doi: 10.1038/s41598-023-36940-5.
5. Multi-scale nested UNet with transformer for colorectal polyp segmentation.
J Appl Clin Med Phys. 2024 Jun;25(6):e14351. doi: 10.1002/acm2.14351. Epub 2024 Mar 29.
6. UViT-Seg: An Efficient ViT and U-Net-Based Framework for Accurate Colorectal Polyp Segmentation in Colonoscopy and WCE Images.
J Imaging Inform Med. 2024 Oct;37(5):2354-2374. doi: 10.1007/s10278-024-01124-8. Epub 2024 Apr 26.
7. HMA-Net: A deep U-shaped network combined with HarDNet and multi-attention mechanism for medical image segmentation.
Med Phys. 2023 Mar;50(3):1635-1646. doi: 10.1002/mp.16065. Epub 2022 Nov 3.
8. A Comprehensive Study on Colorectal Polyp Segmentation With ResUNet++, Conditional Random Field and Test-Time Augmentation.
IEEE J Biomed Health Inform. 2021 Jun;25(6):2029-2040. doi: 10.1109/JBHI.2021.3049304. Epub 2021 Jun 3.
9. GAR-Net: Guided Attention Residual Network for Polyp Segmentation from Colonoscopy Video Frames.
Diagnostics (Basel). 2022 Dec 30;13(1):123. doi: 10.3390/diagnostics13010123.
10. Li-SegPNet: Encoder-Decoder Mode Lightweight Segmentation Network for Colorectal Polyps Analysis.
IEEE Trans Biomed Eng. 2023 Apr;70(4):1330-1339. doi: 10.1109/TBME.2022.3216269. Epub 2023 Mar 21.

Cited By

1. Fine tuned CatBoost machine learning approach for early detection of cardiovascular disease through predictive modeling.
Sci Rep. 2025 Aug 25;15(1):31199. doi: 10.1038/s41598-025-13790-x.
2. Mamba-fusion for privacy-preserving disease prediction.
Sci Rep. 2025 Jul 1;15(1):21819. doi: 10.1038/s41598-025-06306-0.
3. An efficient fine tuning strategy of segment anything model for polyp segmentation.

References

1. Using DUCK-Net for polyp image segmentation.
Sci Rep. 2023 Jun 16;13(1):9803. doi: 10.1038/s41598-023-36940-5.
2. CaraNet: context axial reverse attention network for segmentation of small medical objects.
J Med Imaging (Bellingham). 2023 Jan;10(1):014005. doi: 10.1117/1.JMI.10.1.014005. Epub 2023 Feb 18.
3. Dual encoder-decoder-based deep polyp segmentation network for colonoscopy images.
Sci Rep. 2025 Apr 23;15(1):14088. doi: 10.1038/s41598-025-97802-w.
4. HDL-ACO hybrid deep learning and ant colony optimization for ocular optical coherence tomography image classification.
Sci Rep. 2025 Feb 18;15(1):5888. doi: 10.1038/s41598-025-89961-7.
5. MugenNet: A Novel Combined Convolution Neural Network and Transformer Network with Application in Colonic Polyp Image Segmentation.
Sensors (Basel). 2024 Nov 23;24(23):7473. doi: 10.3390/s24237473.
6. Sci Rep. 2023 Jan 21;13(1):1183. doi: 10.1038/s41598-023-28530-2.
7. Automatic Extraction of Muscle Parameters with Attention UNet in Ultrasonography.
Sensors (Basel). 2022 Jul 13;22(14):5230. doi: 10.3390/s22145230.
8. Clinical target segmentation using a novel deep neural network: double attention Res-U-Net.
Sci Rep. 2022 Apr 25;12(1):6717. doi: 10.1038/s41598-022-10429-z.
9. AFP-Mask: Anchor-Free Polyp Instance Segmentation in Colonoscopy.
IEEE J Biomed Health Inform. 2022 Jul;26(7):2995-3006. doi: 10.1109/JBHI.2022.3147686. Epub 2022 Jul 1.
10. Artificial intelligence-assisted colonoscopy: A review of current state of practice and research.
World J Gastroenterol. 2021 Dec 21;27(47):8103-8122. doi: 10.3748/wjg.v27.i47.8103.
11. Deep Learning for Caries Detection and Classification.
Diagnostics (Basel). 2021 Sep 13;11(9):1672. doi: 10.3390/diagnostics11091672.
12. Improving convolutional neural networks performance for image classification using test time augmentation: a case study using MURA dataset.
Health Inf Sci Syst. 2021 Jul 31;9(1):33. doi: 10.1007/s13755-021-00163-7. eCollection 2021 Dec.
13. Text Data Augmentation for Deep Learning.
J Big Data. 2021;8(1):101. doi: 10.1186/s40537-021-00492-0. Epub 2021 Jul 19.