Real-Time Polyp Detection, Localization and Segmentation in Colonoscopy Using Deep Learning.

Authors

Jha Debesh, Ali Sharib, Tomar Nikhil Kumar, Johansen Håvard D, Johansen Dag, Rittscher Jens, Riegler Michael A, Halvorsen Pål

Affiliations

SimulaMet, 0167 Oslo, Norway.

Department of Engineering Science, Big Data Institute, University of Oxford, Oxford OX3 7XF, U.K.

Publication

IEEE Access. 2021 Mar 4;9:40496-40510. doi: 10.1109/ACCESS.2021.3063716. eCollection 2021.

Abstract

Computer-aided detection, localisation, and segmentation methods can help improve colonoscopy procedures. Even though many methods have been built to tackle automatic detection and segmentation of polyps, benchmarking of state-of-the-art methods remains an open problem. This is due to the increasing number of computer vision methods that can be applied to polyp datasets. Benchmarking of novel methods can provide direction to the development of automated polyp detection and segmentation tasks. Furthermore, it ensures that results produced in the community are reproducible and allows a fair comparison of the developed methods. In this paper, we benchmark several recent state-of-the-art methods using Kvasir-SEG, an open-access dataset of colonoscopy images for polyp detection, localisation, and segmentation, evaluating both method accuracy and speed. While most methods in the literature achieve competitive accuracy, we show that the proposed ColonSegNet achieved the best trade-off, with an average precision of 0.8000, a mean IoU of 0.8100, and the fastest speed of 180 frames per second on the detection and localisation task. Likewise, the proposed ColonSegNet achieved a competitive Dice coefficient of 0.8206 and the best average speed of 182.38 frames per second on the segmentation task. Our comprehensive comparison with various state-of-the-art methods reveals the importance of benchmarking deep learning methods for automated real-time polyp identification and delineation, which can potentially transform current clinical practice and minimise miss-detection rates.
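
For readers unfamiliar with the metrics cited in the abstract, the sketch below shows how the segmentation scores (IoU and Dice coefficient) and the throughput figure (frames per second) are commonly computed from binary masks and a prediction function. This is a minimal illustration of the standard definitions, not the authors' evaluation code; the example masks, the `predict_fn` callable, and the helper names are hypothetical.

```python
# Minimal sketch of the metrics named in the abstract: IoU, Dice, and FPS.
# Standard definitions only; not the paper's benchmarking pipeline.
import time
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((inter + eps) / (union + eps))

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

def frames_per_second(predict_fn, frames, warmup: int = 5) -> float:
    """Average throughput of `predict_fn` over a list of frames."""
    for frame in frames[:warmup]:          # warm-up runs are not timed
        predict_fn(frame)
    start = time.perf_counter()
    for frame in frames:
        predict_fn(frame)
    return len(frames) / (time.perf_counter() - start)

if __name__ == "__main__":
    gt = np.zeros((256, 256), dtype=np.uint8)
    gt[64:128, 64:128] = 1                 # hypothetical ground-truth polyp mask
    pr = np.zeros_like(gt)
    pr[72:136, 72:136] = 1                 # hypothetical predicted mask
    print(f"IoU:  {iou(pr, gt):.4f}")
    print(f"Dice: {dice(pr, gt):.4f}")
```

In practice, per-image scores such as these are averaged over the test split of Kvasir-SEG, and FPS is measured on a fixed input resolution so that accuracy and speed can be compared across models.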


Article figure (PMC): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a2f5/7968127/660049c7e279/jha1-3063716.jpg
