EECE Department, The NorthCap University, 122017, India.
Comput Biol Med. 2022 Aug;147:105680. doi: 10.1016/j.compbiomed.2022.105680. Epub 2022 Jun 2.
A clinically comparable Convolutional Neural Network framework-based technique for automated classification of cancer grades and tissue structures in hematoxylin and eosin-stained colon histopathological images is proposed in this paper. It comprises Enhanced Convolutional Learning Modules (ECLMs), a multi-level Attention Learning Module (ALM), and Transitional Modules (TMs). The ECLMs perform a dual mechanism to extract multi-level discriminative spatial features and model cross-channel correlations with fewer computations while effectively avoiding vanishing-gradient issues. The ALM performs focus refinement through channel-wise elemental attention learning, which accentuates the discriminative channels of the feature maps belonging to important pathological regions, and scale-wise attention learning, which recalibrates feature maps at diverse scales. The TMs concatenate the outputs of these two modules, infuse deep multi-scale features, and eliminate resolution-degradation issues. Varied pre-processing techniques are further employed to improve the generalizability of the proposed network. For performance evaluation, four diverse publicly available datasets (Gland Segmentation challenge (GlaS), Lung Colon (LC)-25000, Kather_Colorectal_Cancer_Texture_Images (Kather-5k), and NCT_HE_CRC_100K (NCT-100k)) and a private dataset, Hospital Colon (HosC), are used, which further aids in building network invariance against the digital variability that exists in real clinical data. In addition, multiple pathologists are involved at every stage of the proposed research, and their verification and approval are obtained for the outcome of each step.
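The abstract does not specify how the ALM's channel-wise attention is implemented; a minimal squeeze-and-excitation-style sketch of channel recalibration, in which all function names, weight shapes, and the reduction ratio are illustrative assumptions rather than the paper's actual design, could look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    """Hypothetical channel-wise attention (squeeze-and-excitation style).

    fmap: feature maps of shape (C, H, W)
    w1:   (C, C//r) squeeze weights, r = reduction ratio
    w2:   (C//r, C) excitation weights
    Returns recalibrated feature maps of the same shape as fmap.
    """
    # Squeeze: global average pooling collapses each channel to a scalar.
    z = fmap.mean(axis=(1, 2))                 # (C,)
    # Excitation: a small bottleneck MLP with a sigmoid produces per-channel
    # gates in (0, 1) that accentuate discriminative channels.
    s = sigmoid(np.maximum(z @ w1, 0.0) @ w2)  # (C,)
    # Reweight every spatial location of each channel by its gate.
    return fmap * s[:, None, None]

# Toy example: 8 channels, 4x4 spatial maps, reduction ratio 2.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((8, 4)) * 0.1
w2 = rng.standard_normal((4, 8)) * 0.1
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because the gates lie strictly in (0, 1), each channel is attenuated rather than amplified here; a learned bias or different gating function would change that behavior.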
For cancer grade classification, the proposed model achieves competitive results on GlaS (Accuracy (97.5%), Precision (97.67%), F1-Score (97.67%), and Recall (97.67%)), LC-25000 (Accuracy (100%), Precision (100%), F1-Score (100%), and Recall (100%)), and HosC (Accuracy (99.45%), Precision (100%), F1-Score (99.65%), and Recall (99.31%)), while for tissue structure classification it achieves Kather-5k (Accuracy (98.83%), Precision (98.86%), F1-Score (98.85%), and Recall (98.85%)) and NCT-100k (Accuracy (97.7%), Precision (97.69%), F1-Score (97.71%), and Recall (97.73%)). Furthermore, the reported activation mappings of Gradient-Weighted Class Activation Mapping (Grad-CAM), Occlusion Sensitivity, and Local Interpretable Model-Agnostic Explanations (LIME) evidence that the proposed model learns on its own the patterns considered pertinent by the pathologists, without any prerequisite annotations. In addition, these visualization results are inspected by multiple expert pathologists and assigned validation scores (GlaS (9.251), LC-25000 (9.045), Kather-5k (9.248), NCT-100k (9.262), and HosC (9.853)). The model will provide a secondary referential diagnosis for pathologists, easing their workload and aiding them in devising an accurate diagnosis and treatment plan.
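The four reported metrics follow standard definitions from the confusion-matrix counts; a minimal binary-classification sketch (the counts below are toy values, not the paper's data, and the paper's multi-class scores would average the per-class values of these quantities) is:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard metrics from confusion-matrix counts:
    tp = true positives, fp = false positives,
    fn = false negatives, tn = true negatives.
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, how many are correct
    recall = tp / (tp + fn)             # of actual positives, how many are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Toy example with made-up counts.
acc, p, r, f1 = classification_metrics(tp=90, fp=5, fn=10, tn=95)
print(round(acc, 3), round(p, 3), round(r, 3), round(f1, 3))
# → 0.925 0.947 0.9 0.923
```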