


Deep Learning-Based Real-Time Organ Localization and Transit Time Estimation in Wireless Capsule Endoscopy.

Authors

Nam Seung-Joo, Moon Gwiseong, Park Jung-Hwan, Kim Yoon, Lim Yun Jeong, Choi Hyun-Soo

Affiliations

Division of Gastroenterology and Hepatology, Department of Internal Medicine, Kangwon National University School of Medicine, Chuncheon 24341, Republic of Korea.

Ziovision Co., Ltd., Chuncheon 24341, Republic of Korea.

Publication

Biomedicines. 2024 Jul 31;12(8):1704. doi: 10.3390/biomedicines12081704.

DOI: 10.3390/biomedicines12081704
PMID: 39200169
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11351118/
Abstract

BACKGROUND

Wireless capsule endoscopy (WCE) has significantly advanced the diagnosis of gastrointestinal (GI) diseases by allowing for the non-invasive visualization of the entire small intestine. However, machine learning-based methods for organ classification in WCE often rely on color information, leading to decreased performance when obstacles such as food debris are present. This study proposes a novel model that integrates convolutional neural networks (CNNs) and long short-term memory (LSTM) networks to analyze multiple frames and incorporate temporal information, ensuring that it performs well even when visual information is limited.

METHODS

We collected data from 126 patients using PillCam™ SB3 (Medtronic, Minneapolis, MN, USA), comprising 2,395,932 images. Our deep learning model was trained to identify organs (stomach, small intestine, and colon) using data from 44 training and 10 validation cases. We applied calibration using a Gaussian filter to improve the accuracy of organ-boundary detection. Additionally, we estimated the transit time of the capsule through the gastric and small-intestine regions using a combination of a CNN and an LSTM designed to exploit the sequential information in continuous video. Finally, we evaluated the model's performance on WCE videos from 72 patients.
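The Gaussian-filter calibration step described here — smoothing noisy per-frame organ predictions before locating the transition frame — can be sketched with NumPy alone. This is an illustrative reconstruction, not the authors' code: the 0.5 decision threshold, the σ = 5-frame kernel width, and the toy probability sequence are all assumptions.

```python
import numpy as np

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """Discrete 1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def calibrate_boundary(stomach_prob: np.ndarray, sigma: float = 5.0) -> int:
    """Smooth noisy per-frame stomach probabilities with a Gaussian filter,
    then take the first frame whose smoothed probability drops below 0.5
    as the stomach -> small-intestine boundary."""
    kernel = gaussian_kernel(sigma, radius=int(3 * sigma))
    smoothed = np.convolve(stomach_prob, kernel, mode="same")
    below = np.nonzero(smoothed < 0.5)[0]
    return int(below[0]) if below.size else len(stomach_prob)

# Toy sequence: stomach for 100 frames, then small intestine, with a few
# spurious misclassifications (e.g. debris-obscured frames) near the transition.
probs = np.concatenate([np.ones(100), np.zeros(100)])
probs[95] = 0.0   # misclassified frame inside the stomach
probs[105] = 1.0  # spurious stomach prediction after the transition
print(calibrate_boundary(probs))
```

Once entry and exit boundaries are located this way, a transit-time estimate follows directly, e.g. (exit frame − entry frame) divided by the capsule's frame rate.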

RESULTS

Our model demonstrated high performance in organ classification, achieving an accuracy, sensitivity, and specificity of over 95% for each organ (stomach, small intestine, and colon), with an overall accuracy and F1-score of 97.1%. The Matthews Correlation Coefficient (MCC) and Geometric Mean (G-mean) were used to evaluate the model's performance on imbalanced datasets, achieving MCC values of 0.93 for the stomach, 0.91 for the small intestine, and 0.94 for the colon, and G-mean values of 0.96 for the stomach, 0.95 for the small intestine, and 0.97 for the colon. Regarding the estimation of gastric and small intestine transit times, the mean time differences between the model predictions and ground truth were 4.3 ± 9.7 min for the stomach and 24.7 ± 33.8 min for the small intestine. Notably, the model's predictions for gastric transit times were within 15 min of the ground truth for 95.8% of the test dataset (69 out of 72 cases). The proposed model shows overall superior performance compared to a CNN-only model.
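For reference, the two imbalance-aware metrics reported above can be computed from one-vs-rest confusion counts. The counts below are hypothetical, chosen only to illustrate the formulas; they are not the study's data.

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient for one class (one-vs-rest)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def g_mean(tp: int, tn: int, fp: int, fn: int) -> float:
    """Geometric mean of sensitivity and specificity."""
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return math.sqrt(sens * spec)

# Hypothetical one-vs-rest counts for a "stomach" class.
print(round(mcc(950, 8800, 120, 50), 3))
print(round(g_mean(950, 8800, 120, 50), 3))
```

Unlike plain accuracy, both metrics penalize a model that favors the majority class, which is why they are informative here: small-intestine frames vastly outnumber stomach and colon frames in a WCE video.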

CONCLUSIONS

The combination of CNN and LSTM proves to be both accurate and clinically effective for organ classification and transit time estimation in WCE. Our model's ability to integrate temporal information allows it to maintain high performance even in challenging conditions where color information alone is insufficient. Including MCC and G-mean metrics further validates the robustness of our approach in handling imbalanced datasets. These findings suggest that the proposed method can significantly improve the diagnostic accuracy and efficiency of WCE, making it a valuable tool in clinical practice for diagnosing and managing GI diseases.


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1582/11351118/365d7b858c8c/biomedicines-12-01704-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1582/11351118/e5895b16b274/biomedicines-12-01704-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1582/11351118/3eba976f281e/biomedicines-12-01704-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1582/11351118/bfd5464248ca/biomedicines-12-01704-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1582/11351118/a604c1740f47/biomedicines-12-01704-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1582/11351118/9930e7068fa9/biomedicines-12-01704-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1582/11351118/00f356027fab/biomedicines-12-01704-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1582/11351118/63e764c8a4f1/biomedicines-12-01704-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1582/11351118/773f3c01ba3a/biomedicines-12-01704-g0A1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1582/11351118/12413a1b6bee/biomedicines-12-01704-g0A2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1582/11351118/ca40672b39df/biomedicines-12-01704-g0A3.jpg

Similar articles

1
Deep Learning-Based Real-Time Organ Localization and Transit Time Estimation in Wireless Capsule Endoscopy.
Biomedicines. 2024 Jul 31;12(8):1704. doi: 10.3390/biomedicines12081704.
2
Wireless capsule endoscopy multiclass classification using three-dimensional deep convolutional neural network model.
Biomed Eng Online. 2023 Dec 15;22(1):124. doi: 10.1186/s12938-023-01186-9.
3
Revealing the Boundaries of Selected Gastro-Intestinal (GI) Organs by Implementing CNNs in Endoscopic Capsule Images.
Diagnostics (Basel). 2023 Feb 23;13(5):865. doi: 10.3390/diagnostics13050865.
4
Small Bowel Detection for Wireless Capsule Endoscopy Using Convolutional Neural Networks with Temporal Filtering.
Diagnostics (Basel). 2022 Jul 31;12(8):1858. doi: 10.3390/diagnostics12081858.
5
Deep learning-based prediction model for diagnosing gastrointestinal diseases using endoscopy images.
Int J Med Inform. 2023 Sep;177:105142. doi: 10.1016/j.ijmedinf.2023.105142. Epub 2023 Jul 5.
6
Deep Learning and Minimally Invasive Endoscopy: Automatic Classification of Pleomorphic Gastric Lesions in Capsule Endoscopy.
Clin Transl Gastroenterol. 2023 Oct 1;14(10):e00609. doi: 10.14309/ctg.0000000000000609.
7
Deep learning for registration of region of interest in consecutive wireless capsule endoscopy frames.
Comput Methods Programs Biomed. 2021 Sep;208:106189. doi: 10.1016/j.cmpb.2021.106189. Epub 2021 May 25.
8
Computer aided wireless capsule endoscopy video segmentation.
Med Phys. 2015 Feb;42(2):645-52. doi: 10.1118/1.4905164.
9
Application of Convolutional Neural Networks for Automated Ulcer Detection in Wireless Capsule Endoscopy Images.
Sensors (Basel). 2019 Mar 13;19(6):1265. doi: 10.3390/s19061265.
10
Deep Convolutional Neural Network for Ulcer Recognition in Wireless Capsule Endoscopy: Experimental Feasibility and Optimization.
Comput Math Methods Med. 2019 Sep 18;2019:7546215. doi: 10.1155/2019/7546215. eCollection 2019.

Cited by

1
Edge Artificial Intelligence Device in Real-Time Endoscopy for Classification of Gastric Neoplasms: Development and Validation Study.
Biomimetics (Basel). 2024 Dec 22;9(12):783. doi: 10.3390/biomimetics9120783.
2
Deep Learning Models for Anatomical Location Classification in Esophagogastroduodenoscopy Images and Videos: A Quantitative Evaluation with Clinical Data.
Diagnostics (Basel). 2024 Oct 23;14(21):2360. doi: 10.3390/diagnostics14212360.

References

1
As how artificial intelligence is revolutionizing endoscopy.
Clin Endosc. 2024 May;57(3):302-308. doi: 10.5946/ce.2023.230. Epub 2024 Mar 8.
2
New AI model for neoplasia detection and characterisation in inflammatory bowel disease.
Gut. 2024 Apr 5;73(5):725-728. doi: 10.1136/gutjnl-2023-330718.
3
The Role of Artificial Intelligence in Colorectal Cancer Screening: Lesion Detection and Lesion Characterization.
Cancers (Basel). 2023 Oct 24;15(21):5126. doi: 10.3390/cancers15215126.
4
Artificial intelligence in inflammatory bowel disease: implications for clinical practice and future directions.
Intest Res. 2023 Jul;21(3):283-294. doi: 10.5217/ir.2023.00020. Epub 2023 Apr 20.
5
Small Bowel Detection for Wireless Capsule Endoscopy Using Convolutional Neural Networks with Temporal Filtering.
Diagnostics (Basel). 2022 Jul 31;12(8):1858. doi: 10.3390/diagnostics12081858.
6
Deep Learning Methods for Anatomical Landmark Detection in Video Capsule Endoscopy Images.
Proc Future Technol Conf (2020). 2021 Nov;1288:426-434. doi: 10.1007/978-3-030-63128-4_32. Epub 2020 Oct 31.
7
Artificial Intelligence in Capsule Endoscopy: A Practical Guide to Its Past and Future Challenges.
Diagnostics (Basel). 2021 Sep 20;11(9):1722. doi: 10.3390/diagnostics11091722.
8
Artificial Intelligence in Upper Gastrointestinal Endoscopy.
Dig Dis. 2022;40(4):395-408. doi: 10.1159/000518232. Epub 2021 Jul 21.
9
Value of the diving method for capsule endoscopy in the examination of small-intestinal disease: a prospective randomized controlled trial.
Gastrointest Endosc. 2021 Oct;94(4):795-802.e1. doi: 10.1016/j.gie.2021.04.018. Epub 2021 Apr 28.
10
Deep learning for wireless capsule endoscopy: a systematic review and meta-analysis.
Gastrointest Endosc. 2020 Oct;92(4):831-839.e8. doi: 10.1016/j.gie.2020.04.039. Epub 2020 Apr 22.