Revealing the Boundaries of Selected Gastro-Intestinal (GI) Organs by Implementing CNNs in Endoscopic Capsule Images.

Author Information

Athanasiou Sofia A, Sergaki Eleftheria S, Polydorou Andreas A, Polydorou Alexios A, Stavrakakis George S, Afentakis Nikolaos M, Vardiambasis Ioannis O, Zervakis Michail E

Affiliations

School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Greece.

Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece.

Publication Information

Diagnostics (Basel). 2023 Feb 23;13(5):865. doi: 10.3390/diagnostics13050865.

Abstract

PURPOSE

The detection of where an organ starts and where it ends is achievable and, since this information can be delivered in real time, it could be quite important for several reasons. For one, with practical knowledge of the Wireless Endoscopic Capsule (WEC) transition through an organ's domain, we are able to align and control the endoscopic operation with any other possible protocol, e.g., delivering some form of treatment on the spot. Another is obtaining richer anatomical topography information per session, and therefore treating the individual in detail (not "in general"). Moreover, gathering more accurate information about a patient merely by implementing clever software procedures is itself a task worth pursuing, since the problems to be overcome in real-time processing of the capsule findings (i.e., wireless transfer of images to another unit that applies the necessary real-time computations) are still challenging. This study proposes a computer-aided detection (CAD) tool, a CNN algorithm deployed to run on a field-programmable gate array (FPGA), able to automatically track, in real time, the capsule's transitions through the entrance (gate) of the esophagus, stomach, small intestine, and colon. The input data are the wirelessly transmitted image shots from the capsule's camera (while the endoscopy capsule is operating).
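For illustration only, the following is a minimal sketch (in PyTorch, which the paper does not specify) of a four-class frame classifier of the kind described above; the layer sizes, the 128x128 RGB input, and the class ordering are assumptions rather than the authors' actual design.

```python
# Hypothetical four-class organ-gate classifier for capsule frames
# (esophagus / stomach / small intestine / colon). Not the authors' architecture.
import torch
import torch.nn as nn

class OrganGateCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify one dummy frame (placeholder for a wirelessly received capsule image).
model = OrganGateCNN()
frame = torch.randn(1, 3, 128, 128)
organ_index = model(frame).argmax(dim=1)   # 0..3 -> esophagus / stomach / small intestine / colon
```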

METHODS

We developed and evaluated three distinct multiclass classification CNNs, trained on the same dataset of 5520 images in total, extracted from 99 capsule videos (1380 frames from each organ of interest). The proposed CNNs differ in size and in the number of convolution filters. The confusion matrix is obtained by training each classifier and evaluating the trained model on an independent test dataset comprising 496 images extracted from 39 capsule videos, 124 from each GI organ. The test dataset was further evaluated by one endoscopist, and his findings were compared with the CNN-based results. The statistical significance of the predictions among the four classes of each model, and of the comparison between the three distinct models, is evaluated by calculating p-values with a chi-square test for multiple classes. The comparison between the three models is carried out by calculating the macro average F1 score and the Matthews correlation coefficient (MCC). The quality of the best CNN model is estimated by calculating its sensitivity and specificity.
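As a hedged sketch of this evaluation protocol (not the authors' code), the confusion matrix, the multi-class chi-square p-value, the macro F1 score, and the MCC can be obtained with standard scikit-learn and SciPy routines; `y_true` and `y_pred` below are placeholder arrays standing in for the 496 test labels and one model's predictions.

```python
# Placeholder evaluation of one model on the 496-image test set.
# Labels: 0 = esophagus, 1 = stomach, 2 = small intestine, 3 = colon.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import confusion_matrix, f1_score, matthews_corrcoef

y_true = np.random.randint(0, 4, size=496)   # placeholder ground truth
y_pred = np.random.randint(0, 4, size=496)   # placeholder CNN predictions

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3])

# Multi-class chi-square test on the confusion matrix (rows = true class, cols = predicted).
chi2, p_value, dof, _ = chi2_contingency(cm)

# Model-comparison metrics: macro-averaged F1 and Matthews correlation coefficient.
macro_f1 = f1_score(y_true, y_pred, average="macro")
mcc = matthews_corrcoef(y_true, y_pred)

print(f"chi2={chi2:.2f}, p={p_value:.4f}, macro F1={macro_f1:.3f}, MCC={mcc:.3f}")
```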

RESULTS

Our experimental results on independent validation demonstrate that the best of our developed models addressed this topological problem, exhibiting a sensitivity of 96.55% and a specificity of 94.73% in the esophagus, a sensitivity of 81.08% and a specificity of 96.55% in the stomach, a sensitivity of 89.65% and a specificity of 97.89% in the small intestine, and a sensitivity of 100% and a specificity of 98.94% in the colon. The macro average accuracy is 95.56% and the macro average sensitivity is 91.82%.
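For clarity on how such per-organ figures arise, sensitivity and specificity can be read off the 4x4 confusion matrix in a one-vs-rest fashion and then macro-averaged; the sketch below uses placeholder counts, not the paper's actual matrix.

```python
# One-vs-rest sensitivity/specificity per organ from a 4x4 confusion matrix,
# plus macro averages. The counts below are illustrative placeholders.
import numpy as np

# rows = true class, columns = predicted class
# order: esophagus, stomach, small intestine, colon
cm = np.array([[28,  1,  0,  0],
               [ 2, 30,  4,  1],
               [ 1,  2, 26,  0],
               [ 0,  0,  0, 29]])

organs = ["esophagus", "stomach", "small intestine", "colon"]
total = cm.sum()
sens, spec, acc = [], [], []
for i, organ in enumerate(organs):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = total - tp - fn - fp
    sens.append(tp / (tp + fn))      # per-organ sensitivity (recall)
    spec.append(tn / (tn + fp))      # per-organ specificity
    acc.append((tp + tn) / total)    # per-organ (one-vs-rest) accuracy
    print(f"{organ}: sensitivity={sens[-1]:.2%}, specificity={spec[-1]:.2%}")

print(f"macro sensitivity={np.mean(sens):.2%}, macro accuracy={np.mean(acc):.2%}")
```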

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a123/10000441/654c59b9a3dc/diagnostics-13-00865-g001.jpg
