Binzagr Faisal
Department of Computer Science, King Abdulaziz University, Rabigh, Saudi Arabia.
Front Med (Lausanne). 2024 Apr 15;11:1349373. doi: 10.3389/fmed.2024.1349373. eCollection 2024.
Although AI-assisted cancer cell detection has proven highly effective, several obstacles still hinder its use in clinical settings. These issues stem largely from the inability to identify the underlying decision processes: because AI-assisted diagnosis does not expose a clear rationale, clinicians remain skeptical of it. Here, Explainable Artificial Intelligence (XAI), which provides explanations for prediction models, addresses the AI black-box problem. This work focuses on the SHapley Additive exPlanations (SHAP) approach to interpreting model predictions. The underlying model is a hybrid of three Convolutional Neural Networks (CNNs), InceptionV3, InceptionResNetV2, and VGG16, whose predictions are combined. The model was trained on the KvasirV2 dataset, which comprises pathological findings associated with cancer. Our combined model achieved an accuracy of 93.17% and an F1 score of 97%. After training the combined model, we apply SHAP to images from these three groups to explain the decisions that drive the model's predictions.
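A minimal Keras sketch of the kind of three-backbone ensemble the abstract describes. The shared 224x224 input size, the frozen ImageNet backbones, the 256-unit dense heads, and the simple prediction averaging are all assumptions made for illustration; the paper's exact architecture, preprocessing, and hyperparameters are not given here.

    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import InceptionV3, InceptionResNetV2, VGG16

    NUM_CLASSES = 8               # KvasirV2 is commonly distributed with 8 classes
    INPUT_SHAPE = (224, 224, 3)   # assumed common input size for all three backbones

    # Single shared image input; per-backbone preprocessing is omitted for brevity
    # (inputs are assumed to be rescaled consistently beforehand).
    inp = layers.Input(shape=INPUT_SHAPE, name="image")

    def branch(backbone_cls, name):
        """One frozen backbone with its own small classification head."""
        backbone = backbone_cls(include_top=False, weights="imagenet",
                                input_shape=INPUT_SHAPE, pooling="avg")
        backbone.trainable = False
        x = backbone(inp)
        x = layers.Dense(256, activation="relu")(x)
        return layers.Dense(NUM_CLASSES, activation="softmax",
                            name=f"{name}_probs")(x)

    p1 = branch(InceptionV3, "inceptionv3")
    p2 = branch(InceptionResNetV2, "inceptionresnetv2")
    p3 = branch(VGG16, "vgg16")

    # Combine the three branch predictions; plain averaging is one possible fusion.
    combined = layers.Average(name="ensemble_probs")([p1, p2, p3])
    ensemble = Model(inputs=inp, outputs=combined)

    ensemble.compile(optimizer="adam",
                     loss="categorical_crossentropy",
                     metrics=["accuracy"])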
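A similarly hedged sketch of producing SHAP image explanations for such a model. GradientExplainer and image_plot are standard calls in the shap library; the ensemble model, the images array, and the background/sample split are hypothetical placeholders carried over from the sketch above, not the paper's actual pipeline.

    import shap

    # `ensemble` is the Keras model from the sketch above; `images` is assumed to be
    # a NumPy array of preprocessed KvasirV2 frames with shape (n, 224, 224, 3).
    background = images[:50]      # small background sample the explainer integrates over
    to_explain = images[50:55]    # a handful of images to explain

    # GradientExplainer approximates SHAP values for differentiable models.
    explainer = shap.GradientExplainer(ensemble, background)
    shap_values = explainer.shap_values(to_explain)

    # Overlay per-pixel attributions on the inputs, one panel per output class.
    shap.image_plot(shap_values, to_explain)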