Shandong Junteng Medical Technology Co., Ltd, Jinan, China; College of Computer Science, Shaanxi Normal University, Xian, China.
Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Louis Stokes Cleveland VA Medical Center, Cleveland, OH, USA.
Oral Oncol. 2022 Aug;131:105942. doi: 10.1016/j.oraloncology.2022.105942. Epub 2022 Jun 8.
Tissue slides from oral cavity squamous cell carcinoma (OC-SCC), particularly the epithelial regions, hold morphologic features that are both diagnostic and prognostic. Yet, previously developed approaches for automated epithelium segmentation in OC-SCC have not been independently tested in a multi-center setting. In this study, we aimed to investigate the effectiveness and applicability of a convolutional neural network (CNN) model for epithelial segmentation using digitized H&E-stained diagnostic slides from OC-SCC patients in a multi-center setting.
A CNN model was developed to segment the epithelial regions of digitized slides (n = 810), retrospectively collected from five different centers. Deep learning models were trained and validated using well-annotated tissue microarray (TMA) images (n = 212) at various magnifications. The best-performing model was locked down and used for independent testing on a total of 478 whole-slide images (WSIs). Manually annotated epithelial regions were used as the reference standard for evaluation. We also compared the model-generated results against IHC-stained epithelium (n = 120) as a reference.
The locked-down CNN model trained on the TMA image training cohorts at 10x magnification achieved the best segmentation performance. The locked-down model performed consistently, yielding pixel accuracy, recall, precision, and Dice coefficient values ranging from 95.8% to 96.6%, 79.1% to 93.8%, 85.7% to 89.3%, and 82.3% to 89.0%, respectively, across the three independent testing WSI cohorts.
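The four evaluation metrics reported above are standard pixel-level measures computed from the confusion counts between the predicted and reference epithelium masks. A minimal sketch of how they could be computed (using NumPy; the function name and the toy masks are illustrative, not from the study):

```python
import numpy as np

def segmentation_metrics(pred, ref):
    """Pixel-level metrics for binary epithelium masks.

    pred, ref: boolean arrays of identical shape; True marks epithelium.
    Returns pixel accuracy, recall, precision, and Dice coefficient.
    """
    tp = np.sum(pred & ref)        # epithelium correctly predicted
    tn = np.sum(~pred & ~ref)      # background correctly predicted
    fp = np.sum(pred & ~ref)       # background predicted as epithelium
    fn = np.sum(~pred & ref)       # epithelium missed by the model
    return {
        "pixel_accuracy": (tp + tn) / pred.size,
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

# Toy 4x4 masks for illustration only
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 1, 1]], dtype=bool)
ref = np.array([[1, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 1, 0]], dtype=bool)

m = segmentation_metrics(pred, ref)
# Here tp=4, fp=2, fn=0, tn=10, so dice = 8/10 = 0.8 and recall = 1.0
```

Note that Dice balances precision and recall (it is their harmonic mean for binary masks), which is why it is the customary headline metric for segmentation even when pixel accuracy is high due to abundant background.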
The automated model achieved consistently accurate epithelial region segmentation compared to manual annotations. This model could be integrated into a computer-aided diagnosis or prognosis system.