Samriddhi College, Lokanthali, Bhaktapur, Kathmandu, Nepal.
School of Engineering and Technology, Central Queensland University, Norman Gardens, 4701, Rockhampton, Queensland, Australia.
Comput Biol Med. 2022 Nov;150:106156. doi: 10.1016/j.compbiomed.2022.106156. Epub 2022 Oct 3.
Chest X-ray (CXR) images are considered useful for monitoring and investigating a variety of pulmonary disorders such as COVID-19, Pneumonia, and Tuberculosis (TB). With recent technological advancements, such diseases may now be recognized more precisely using computer-assisted diagnostics. Without compromising classification accuracy, and with improved feature extraction, a deep learning (DL) model that predicts four different categories is proposed in this study. The proposed model is validated on publicly available datasets of 7132 chest X-ray (CXR) images. Furthermore, results are interpreted and explained using Gradient-weighted Class Activation Mapping (Grad-CAM), Local Interpretable Model-agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP) for better understandability. Initially, convolutional features are extracted to collect high-level object-based information. Next, Shapley values from SHAP, predictability results from LIME, and heatmaps from Grad-CAM are used to explore the black-box behaviour of the DL model, which achieves an average test accuracy of 94.31 ± 1.01% and a validation accuracy of 94.54 ± 1.33% under 10-fold cross-validation. Finally, in order to validate the model and qualify the medical risk, clinical interpretations of the classifications are used to consolidate the explanations generated by the eXplainable Artificial Intelligence (XAI) framework. The results suggest that XAI and DL models give clinicians/medical professionals persuasive and coherent conclusions related to the detection and categorization of COVID-19, Pneumonia, and TB.
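The evaluation protocol above (10-fold cross-validation over 7132 images in four categories, reporting mean ± standard deviation accuracy) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: a majority-class baseline stands in for the DL model, the features are random placeholders, and the fourth category ("Normal" alongside the three named diseases) is an assumption, since the abstract names only three disorders.

```python
import numpy as np

# Four categories: the abstract names three diseases; "Normal" is an
# assumed fourth class for illustration.
CLASSES = ["COVID-19", "Pneumonia", "Tuberculosis", "Normal"]

def kfold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)

def cross_validate(X, y, k=10):
    """Return per-fold test accuracies.

    A placeholder majority-class predictor replaces the DL model, so the
    feature matrix X is unused here; a real run would train on
    (X[train_idx], y[train_idx]) and score on the held-out fold.
    """
    folds = kfold_indices(len(y), k)
    accs = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        majority = np.bincount(y[train_idx]).argmax()  # "trained" model
        accs.append(float(np.mean(y[test_idx] == majority)))
    return accs

# Toy stand-in for the 7132 CXR images (random labels and features).
rng = np.random.default_rng(1)
y = rng.integers(0, len(CLASSES), size=7132)
X = rng.normal(size=(7132, 8))

accs = cross_validate(X, y)
print(f"accuracy: {np.mean(accs):.4f} +/- {np.std(accs):.4f}")
```

Reporting the across-fold mean and standard deviation, as in the abstract's 94.31 ± 1.01%, summarizes how stable the model is under different train/test partitions.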