Department of Pathology, Hallym University Dongtan Sacred Heart Hospital, Hallym University College of Medicine, Hwaseong, Republic of Korea.
Medical Artificial Intelligence Center, Hallym University Sacred Heart Hospital, Anyang, Republic of Korea.
Sci Rep. 2022 Jul 27;12(1):12804. doi: 10.1038/s41598-022-16885-x.
Colonoscopy is an effective tool for detecting colorectal lesions, but it requires the support of pathological diagnosis. This study aimed to develop and validate deep learning models that automatically classify digital pathology images of colon lesions obtained from colonoscopy-related specimens. Histopathological slides of colonoscopic biopsy or resection specimens were collected and grouped into six classes by disease category: adenocarcinoma, tubular adenoma (TA), traditional serrated adenoma (TSA), sessile serrated adenoma (SSA), hyperplastic polyp (HP), and non-specific lesions. Digital photographs were taken of each pathological slide to fine-tune two pre-trained convolutional neural networks, and model performance was evaluated. A total of 1865 images from 703 patients were included, of which 10% were used as a test dataset. For six-class classification, the mean diagnostic accuracy was 97.3% (95% confidence interval [CI], 96.0-98.6%) with DenseNet-161 and 95.9% (95% CI 94.1-97.7%) with EfficientNet-B7. The per-class area under the receiver operating characteristic curve (AUC) was highest for adenocarcinoma with DenseNet-161 (1.000; 95% CI 0.999-1.000) and for TSA with EfficientNet-B7 (1.000; 95% CI 1.000-1.000). The lowest per-class AUCs were still excellent: 0.991 (95% CI 0.983-0.999) for HP with DenseNet-161 and 0.995 (95% CI 0.992-0.998) for SSA with EfficientNet-B7. The deep learning models achieved excellent performance in discriminating adenocarcinoma from non-adenocarcinoma lesions, with an AUC of 0.995 or 0.998. The pathognomonic area for each class was appropriately highlighted in the digital images by saliency maps, which focused particularly on epithelial lesions. Deep learning models may be a useful tool to assist in the diagnosis of pathologic slides from colonoscopy-related specimens.
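The abstract describes fine-tuning pre-trained convolutional neural networks (DenseNet-161, EfficientNet-B7) for six-class classification and visualizing predictions with saliency maps. The sketch below is not the authors' code; it is a minimal PyTorch/torchvision illustration of that general workflow, assuming a hypothetical folder-per-class dataset layout and illustrative hyperparameters.

```python
# Minimal sketch (assumed setup, not the published implementation):
# fine-tune an ImageNet-pretrained DenseNet-161 for six lesion classes,
# then compute a vanilla gradient saliency map for one image.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

NUM_CLASSES = 6  # adenocarcinoma, TA, TSA, SSA, HP, non-specific lesion

# Standard ImageNet preprocessing for transfer learning.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: "slides/train/<class_name>/*.jpg".
train_set = datasets.ImageFolder("slides/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Load the pre-trained backbone and replace the classifier head.
model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative fine-tuning epoch.
model.train()
for images, labels in train_loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# Vanilla gradient saliency for a single image: gradient of the top class
# score with respect to the input pixels, a simple variant of the saliency
# maps mentioned in the abstract.
model.eval()
image, _ = train_set[0]
image = image.unsqueeze(0).to(device).requires_grad_(True)
score = model(image).max()
score.backward()
saliency = image.grad.abs().max(dim=1)[0]  # (1, 224, 224) heat map
```

The same loop applies to EfficientNet-B7 by swapping the backbone and its classifier layer; only the head replacement differs between architectures.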