Automated Detection of Anatomical Landmarks During Colonoscopy Using a Deep Learning Model.

Author Information

Taghiakbari Mahsa, Hamidi Ghalehjegh Sina, Jehanno Emmanuel, Berthier Tess, di Jorio Lisa, Ghadakzadeh Saber, Barkun Alan, Takla Mark, Bouin Mickael, Deslandres Eric, Bouchard Simon, Sidani Sacha, Bengio Yoshua, von Renteln Daniel

Affiliations

Faculty of Medicine, Department of Biomedical Sciences, University of Montreal, Montreal, Quebec, Canada.

Department of Medicine, Division of Gastroenterology, University of Montreal Hospital Research Center (CRCHUM), Montreal, Quebec, Canada.

Publication Information

J Can Assoc Gastroenterol. 2023 May 2;6(4):145-151. doi: 10.1093/jcag/gwad017. eCollection 2023 Aug.

Abstract

BACKGROUND AND AIMS

Identification and photo-documentation of the ileocecal valve (ICV) and appendiceal orifice (AO) confirm completeness of colonoscopy examinations. We aimed to develop and test a deep convolutional neural network (DCNN) model that can automatically identify ICV and AO, and differentiate these landmarks from normal mucosa and colorectal polyps.

METHODS

We prospectively collected annotated full-length colonoscopy videos of 318 patients undergoing outpatient colonoscopies. We created three nonoverlapping training, validation, and test data sets with 25,444 unaltered frames extracted from the colonoscopy videos showing four landmarks/image classes (AO, ICV, normal mucosa, and polyps). A DCNN classification model was developed, validated, and tested in separate data sets of images containing the four different landmarks.
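Below is a minimal sketch of how such a four-class frame classifier could be set up. The abstract does not name the network architecture, framework, or preprocessing; a torchvision ResNet-50 backbone with standard ImageNet preprocessing is assumed here purely for illustration.

```python
# Sketch of a four-class colonoscopy frame classifier (AO, ICV, normal mucosa, polyp).
# Backbone and preprocessing are assumptions; the paper does not specify them.
import torch
import torch.nn as nn
from torchvision import models, transforms

CLASSES = ["appendiceal_orifice", "ileocecal_valve", "normal_mucosa", "polyp"]

# Standard ImageNet preprocessing for extracted video frames (assumed).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with the final layer replaced by a 4-way classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of labeled frames; returns the batch loss."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```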

RESULTS

After training and validation, the DCNN model could identify both AO and ICV in 18 out of 21 patients (85.7%). The accuracies of the model for differentiating AO from normal mucosa and ICV from normal mucosa were 86.4% (95% CI 84.1% to 88.5%) and 86.4% (95% CI 84.1% to 88.6%), respectively. Furthermore, the accuracy of the model for differentiating polyps from normal mucosa was 88.6% (95% CI 86.6% to 90.3%).
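For reference, the sketch below shows one way an accuracy estimate with a 95% confidence interval can be computed from test-set counts. The abstract does not state which interval method was used; a Wilson score interval and the frame counts shown are assumptions for illustration only.

```python
# Sketch: 95% confidence interval for a classification accuracy (binomial proportion).
# Wilson score interval assumed; counts below are hypothetical.
import math

def wilson_ci(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a proportion such as test-set accuracy."""
    p = correct / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

# Hypothetical example: 864 of 1000 test frames classified correctly (86.4% accuracy).
low, high = wilson_ci(864, 1000)
print(f"accuracy 86.4% (95% CI {low:.1%} to {high:.1%})")
```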

CONCLUSION

This model offers a novel tool to assist endoscopists with automated identification of AO and ICV during colonoscopy. The model can reliably distinguish these anatomical landmarks from normal mucosa and colorectal polyps. It can be integrated into automated colonoscopy report generation, photo-documentation, and quality-auditing solutions to improve colonoscopy reporting quality.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3151/10395661/2153afa8d8db/gwad017_fig1.jpg
