
Automated endoscopic detection and classification of colorectal polyps using convolutional neural networks.

Author information

Ozawa Tsuyoshi, Ishihara Soichiro, Fujishiro Mitsuhiro, Kumagai Youichi, Shichijo Satoki, Tada Tomohiro

Affiliations

Department of Surgery, Teikyo University School of Medicine, 2-11-1 Kaga, Itabashi-ku, Tokyo 173-8606, Japan.

Tada Tomohiro Institute of Gastroenterology and Proctology, Saitama, Japan.

Publication information

Therap Adv Gastroenterol. 2020 Mar 20;13:1756284820910659. doi: 10.1177/1756284820910659. eCollection 2020.

Abstract

BACKGROUND

Recently, the American Society for Gastrointestinal Endoscopy addressed the 'resect and discard' strategy, determining that accurate differentiation of colorectal polyps (CP) is necessary. Previous studies have suggested promising applications of artificial intelligence (AI) using deep learning for object recognition. Therefore, we aimed to construct an AI system that can accurately detect and classify CP using still images stored during colonoscopy.

METHODS

We used a deep convolutional neural network (CNN) architecture called the Single Shot MultiBox Detector (SSD). We trained the CNN using 16,418 images from 4752 CPs and 4013 images of normal colorectums, and subsequently validated the performance of the trained CNN on 7077 colonoscopy images, including 1172 CP images from 309 CPs of various types. Diagnostic speed and yields for the detection and classification of CP were evaluated as measures of the trained CNN's performance.
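For concreteness, the following is a minimal sketch of what SSD-style inference on a single stored colonoscopy image could look like. It uses torchvision's SSD300 implementation as a stand-in for the authors' network; the class list, weight file name, and score threshold are illustrative assumptions, not details reported in the paper.

# Illustrative sketch only: torchvision's SSD300 as a stand-in for the trained CNN.
# The class names, weight file, and threshold below are assumptions, not the study's.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

CLASSES = ["background", "adenoma", "hyperplastic polyp", "SSAP", "cancer", "other"]  # hypothetical labels

model = torchvision.models.detection.ssd300_vgg16(weights=None, num_classes=len(CLASSES))
# model.load_state_dict(torch.load("polyp_ssd.pth"))  # hypothetical trained weights
model.eval()

# Load one stored still image and scale pixel values to [0, 1] floats.
image = convert_image_dtype(read_image("colonoscopy_still.jpg"), torch.float)

with torch.no_grad():
    output = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

threshold = 0.5  # illustrative operating point
for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
    if score >= threshold:
        print(f"{CLASSES[label]}: box={box.tolist()}, score={score:.2f}")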

RESULTS

The processing time of the CNN was 20 ms per frame. The trained CNN detected 1246 CP with a sensitivity of 92% and a positive predictive value (PPV) of 86%. The sensitivity and PPV were 90% and 83%, respectively, for white light images, and 97% and 98% for narrow band images. Among the correctly detected polyps, 83% of the CP were accurately classified from the images. Furthermore, 97% of adenomas were precisely identified under white light imaging.
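As a reading aid, the snippet below shows how per-polyp sensitivity and PPV are conventionally computed from true positive, false positive, and false negative counts; the counts used here are made up so that they reproduce the headline 92% and 86% figures, and are not the study's actual data. Note also that 20 ms per frame corresponds to roughly 50 frames per second.

# Conventional detection metrics; the example counts are placeholders, not study data.
def detection_metrics(tp: int, fp: int, fn: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)   # fraction of true polyps that were detected
    ppv = tp / (tp + fp)           # fraction of detections that were true polyps
    return sensitivity, ppv

sens, ppv = detection_metrics(tp=92, fp=15, fn=8)  # illustrative counts only
print(f"sensitivity = {sens:.0%}, PPV = {ppv:.0%}")  # sensitivity = 92%, PPV = 86%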

CONCLUSIONS

Our CNN showed promise in detecting and classifying CP from endoscopic images, highlighting its high potential for future application as an AI-based CP diagnosis support system for colonoscopy.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8e17/7092386/d23bcd14e0c1/10.1177_1756284820910659-fig1.jpg
