Gong Eun Jeong, Bang Chang Seok, Lee Jae Jun, Seo Seung In, Yang Young Joo, Baik Gwang Ho, Kim Jong Wook
Department of Internal Medicine, Hallym University College of Medicine, Chuncheon 24253, Korea.
Institute for Liver and Digestive Diseases, Hallym University, Chuncheon 24253, Korea.
J Pers Med. 2022 Jun 12;12(6):963. doi: 10.3390/jpm12060963.
The authors previously developed deep-learning models for the prediction of colorectal polyp histology (advanced colorectal cancer, early cancer/high-grade dysplasia, tubular adenoma with or without low-grade dysplasia, or non-neoplasm) from endoscopic images. While the model achieved 67.3% internal-test accuracy and 79.2% external-test accuracy, model development was labour-intensive and required specialised programming expertise. Moreover, the 240-image external-test dataset included only three advanced and eight early cancers, so it was difficult to generalise model performance. These limitations may be mitigated by deep-learning models developed using no-code platforms.
To establish no-code platform-based deep-learning models for the prediction of colorectal polyp histology from white-light endoscopy images and compare their diagnostic performance with traditional models.
The same 3828 endoscopic images used to establish the previous models were used to establish new models based on the no-code platforms Neuro-T, VLAD, and Create ML-Image Classifier. A prospective multicentre validation study was then conducted using 3818 novel images. The primary outcome was the accuracy of four-category prediction.
The model established using Neuro-T achieved the highest internal-test accuracy (75.3%, 95% confidence interval: 71.0-79.6%) and external-test accuracy (80.2%, 76.9-83.5%) but required the longest training time. In contrast, the model established using Create ML-Image Classifier required only 3 min for training and still achieved 72.7% (70.8-74.6%) external-test accuracy. Attention map analysis revealed that the imaging features used by the no-code deep-learning models were similar to those used by endoscopists during visual inspection.
No-code deep-learning tools allow for the rapid development of models with high accuracy for predicting colorectal polyp histology.
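The primary outcome throughout is four-category accuracy reported with a 95% confidence interval (e.g. 80.2%, 76.9-83.5%). A minimal sketch of such a computation is below; the function name and the normal-approximation (Wald) interval are illustrative assumptions, since the abstract does not state which interval method the authors used.

```python
import math

def accuracy_with_ci(correct: int, total: int, z: float = 1.96):
    """Point accuracy with a normal-approximation (Wald) 95% CI.

    Illustrative only: the paper does not specify its CI method;
    a Wilson or exact interval may have been used instead.
    """
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of a proportion
    # Clamp to [0, 1] so small samples cannot yield impossible bounds
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# e.g. 80 correct four-category predictions out of 100 test images
acc, lo, hi = accuracy_with_ci(80, 100)
```

For the sample above this yields an accuracy of 0.80 with an interval of roughly 0.72-0.88; the intervals in the abstract are narrower because the test sets are far larger.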