Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, China.
Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
Int J Comput Assist Radiol Surg. 2020 Jul;15(7):1085-1094. doi: 10.1007/s11548-020-02148-5. Epub 2020 May 6.
Upper gastrointestinal (GI) endoscopic image documentation provides an efficient, low-cost means of quality control for endoscopic reporting. The task is challenging for computer-assisted techniques, however, because different anatomical sites can look similar, and the appearance of a given site may vary widely and inconsistently across patients. Following the British and modified Japanese guidelines, we therefore propose a set of oesophagogastroduodenoscopy (EGD) images to be routinely captured and evaluate its suitability for deep-learning-based classification methods.
A novel EGD image dataset, standardising the upper GI endoscopic examination into a series of steps, is established following the landmarks proposed in the guidelines and annotated by an expert clinician. To demonstrate that the proposed landmarks are discriminable, and thereby enable generation of an automated endoscopic report, we train several deep-learning-based classification models on the annotated images.
We report results on a clinical dataset of 211 patients (3704 EGD images in total) acquired during routine upper GI endoscopic examinations. The labels predicted by our method agree closely with the ground truth provided by human experts. We also observe the limitations of the current static-image classification scheme for EGD images.
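As an illustration of how predicted landmark labels can be scored against expert ground truth, the following minimal sketch computes per-class agreement. The class names and labels are hypothetical examples, not taken from the paper's dataset:

```python
from collections import Counter

def per_class_agreement(y_true, y_pred):
    """Fraction of images of each ground-truth class whose predicted
    label matches the expert annotation."""
    correct = Counter()
    total = Counter()
    for true_label, pred_label in zip(y_true, y_pred):
        total[true_label] += 1
        if true_label == pred_label:
            correct[true_label] += 1
    return {cls: correct[cls] / total[cls] for cls in total}

# Toy labels for three hypothetical landmark classes.
y_true = ["oesophagus", "antrum", "duodenum", "antrum", "oesophagus"]
y_pred = ["oesophagus", "antrum", "antrum", "antrum", "oesophagus"]
agreement = per_class_agreement(y_true, y_pred)
```

A per-class breakdown of this kind is more informative than overall accuracy alone, since visually similar sites (e.g. adjacent gastric regions) tend to account for most disagreements.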
Our study presents a framework for generating automated EGD reports using deep learning. We show that our method is a feasible approach to EGD image classification, indicate how it can lead to improved performance, and qualitatively demonstrate its behaviour on our dataset.