Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Korea.
Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea.
Gastrointest Endosc. 2022 Feb;95(2):258-268.e10. doi: 10.1016/j.gie.2021.08.022. Epub 2021 Sep 4.
BACKGROUND AND AIMS: Endoscopic differential diagnosis of gastric mucosal lesions (benign gastric ulcer, early gastric cancer [EGC], and advanced gastric cancer) remains challenging. We aimed to develop and validate convolutional neural network-based artificial intelligence (AI) models: a lesion detection model, a differential diagnosis model (AI-DDx), and an invasion depth model (AI-ID; pT1a vs pT1b among EGCs).
METHODS: This study included 1366 consecutive patients with gastric mucosal lesions from 2 referral centers in Korea. One representative endoscopic image from each patient was used, and histologic diagnoses served as the criterion standard. Performance of the AI-DDx (training/internal/external validation sets, 1009/112/245) was compared with visual diagnoses by independent endoscopists (stratified as novice [<1 year of experience], intermediate [2-3 years of experience], and expert [>5 years of experience]), and performance of the AI-ID (training/internal/external validation sets, 620/68/155) was compared with EUS results.
RESULTS: The AI-DDx showed good diagnostic performance in both the internal (area under the receiver operating characteristic curve [AUROC] = .86) and external validation sets (AUROC = .86). In the external validation set, the AI-DDx performed better than novice (AUROC = .82, P = .01) and intermediate endoscopists (AUROC = .84, P = .02) and was comparable with experts (AUROC = .89, P = .12). The AI-ID showed fair performance in both the internal (AUROC = .78) and external validation sets (AUROC = .73), significantly better than EUS performed by experts (internal validation, AUROC = .62; external validation, AUROC = .56; both P < .001).
CONCLUSIONS: The AI-DDx was comparable with experts and outperformed novice and intermediate endoscopists for the differential diagnosis of gastric mucosal lesions. The AI-ID performed better than EUS for the evaluation of invasion depth.
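To make the AUROC comparisons above concrete, the sketch below shows how such a reader-versus-model comparison can be computed in Python with scikit-learn, using a paired bootstrap on the AUROC difference. This is not the authors' code: the labels and scores are synthetic, the task is reduced to a binary benign-versus-malignant split for illustration, and only the sample size (245) mirrors the external validation cohort; the paper's own analysis may have used a different statistical test (e.g., DeLong's method).

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 245  # mirrors the external validation set size (illustrative only)

# Hypothetical ground truth: 1 = malignant (EGC/advanced cancer), 0 = benign ulcer.
y_true = rng.integers(0, 2, size=n)

# Hypothetical continuous confidence scores from the AI model and from one reader.
ai_scores = np.clip(0.55 * y_true + rng.normal(0.25, 0.20, size=n), 0.0, 1.0)
reader_scores = np.clip(0.45 * y_true + rng.normal(0.25, 0.25, size=n), 0.0, 1.0)

print("AI AUROC:    ", round(roc_auc_score(y_true, ai_scores), 3))
print("Reader AUROC:", round(roc_auc_score(y_true, reader_scores), 3))

# Paired bootstrap on the AUROC difference between the two score sets.
n_boot = 2000
idx = np.arange(n)
diffs = []
for _ in range(n_boot):
    b = rng.choice(idx, size=n, replace=True)
    if np.unique(y_true[b]).size < 2:  # each resample must contain both classes
        continue
    diffs.append(roc_auc_score(y_true[b], ai_scores[b]) -
                 roc_auc_score(y_true[b], reader_scores[b]))
ci_lo, ci_hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUROC difference, 95% bootstrap CI: [{ci_lo:.3f}, {ci_hi:.3f}]")

A confidence interval for the AUROC difference that excludes zero corresponds to a statistically significant difference between the model and the comparator, which is the form of claim reported in the results above.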