Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Patiala (PIN: 147004), Punjab, India.
Comput Methods Programs Biomed. 2024 Jan;243:107879. doi: 10.1016/j.cmpb.2023.107879. Epub 2023 Oct 24.
Artificial intelligence (AI) has several uses in the healthcare industry, including healthcare management, medical forecasting, practical decision-making, and diagnosis. AI technologies have reached human-like performance, but their adoption remains limited because they are still largely viewed as opaque black boxes; this distrust is the primary barrier to their real-world application, particularly in healthcare. There is therefore a need for interpretable predictors that not only make better predictions but also explain them.
This study introduces "DeepXplainer", a new interpretable hybrid deep learning-based technique for detecting lung cancer and explaining its predictions. The technique combines a convolutional neural network (CNN) with XGBoost: the CNN's convolutional layers automatically learn features from the input, and XGBoost then performs the class-label prediction. To provide explainability, the explainable artificial intelligence (XAI) method "SHAP" (SHapley Additive exPlanations) is applied.
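The abstract does not specify the network architecture or hyperparameters, so the following is a minimal sketch of the described CNN-plus-XGBoost hybrid, assuming a Keras 1-D CNN over the tabular survey attributes; `N_FEATURES`, the layer sizes, and the training settings are illustrative assumptions, not the authors' exact DeepXplainer configuration.

```python
# Minimal sketch of a CNN + XGBoost hybrid (ConvXGB-style) classifier.
# Assumptions: tabular input with N_FEATURES columns reshaped for 1-D
# convolution; layer sizes and hyperparameters are illustrative only,
# not the authors' exact DeepXplainer configuration.
import numpy as np
import tensorflow as tf
from xgboost import XGBClassifier

N_FEATURES = 15  # number of survey attributes; an assumption

# 1. CNN used as an automatic feature learner.
inputs = tf.keras.Input(shape=(N_FEATURES, 1))
x = tf.keras.layers.Conv1D(32, 3, activation="relu", padding="same")(inputs)
x = tf.keras.layers.Conv1D(64, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
features = tf.keras.layers.Dense(16, activation="relu", name="learned_features")(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(features)

cnn = tf.keras.Model(inputs, outputs)
cnn.compile(optimizer="adam", loss="binary_crossentropy")

def train_hybrid(X_train, y_train):
    """Train the CNN, then fit XGBoost on its learned features."""
    X_cnn = X_train[..., np.newaxis]  # add a channel axis for Conv1D
    cnn.fit(X_cnn, y_train, epochs=20, batch_size=32, verbose=0)

    # 2. Extract the penultimate-layer activations as learned features.
    extractor = tf.keras.Model(cnn.input, cnn.get_layer("learned_features").output)
    learned = extractor.predict(X_cnn, verbose=0)

    # 3. XGBoost performs the final class-label prediction.
    booster = XGBClassifier(n_estimators=200, max_depth=4)
    booster.fit(learned, y_train)
    return extractor, booster
```

The key design choice this illustrates is the division of labor in the hybrid: the CNN replaces manual feature engineering, while the gradient-boosted trees handle the final decision boundary.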
The method was evaluated on the open-source "Survey Lung Cancer" dataset and outperformed existing methods on multiple metrics, including accuracy, sensitivity, and F1-score, achieving an accuracy of 97.43%, a sensitivity of 98.71%, and an F1-score of 98.08%. Once the model makes its predictions, each one is explained by applying the explainable artificial intelligence method at both the local and global levels.
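As a hedged illustration of local and global SHAP explanations, the sketch below applies `shap.TreeExplainer` to the XGBoost stage from the previous sketch. Note that this explains the booster in terms of the CNN's learned features, whereas the paper presumably presents explanations over the original survey attributes; the names `extractor` and `booster` carry over from the earlier sketch and are not from the source.

```python
# Minimal sketch of local and global SHAP explanations for the XGBoost
# stage. Assumes `extractor` and `booster` from the previous sketch;
# `feature_names` is a hypothetical placeholder.
import numpy as np
import shap

def explain(extractor, booster, X, feature_names=None):
    learned = extractor.predict(X[..., np.newaxis], verbose=0)

    # TreeExplainer computes exact SHAP values for tree ensembles.
    explainer = shap.TreeExplainer(booster)
    shap_values = explainer.shap_values(learned)

    # Local explanation: each feature's contribution to one prediction.
    shap.force_plot(explainer.expected_value, shap_values[0],
                    learned[0], feature_names=feature_names, matplotlib=True)

    # Global explanation: feature importance across the whole dataset.
    shap.summary_plot(shap_values, learned, feature_names=feature_names)
```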
A deep learning-based classification model for lung cancer is proposed with three primary components: one for feature learning, one for classification, and a third for explaining the predictions made by the proposed hybrid (ConvXGB) model. The proposed "DeepXplainer" has been evaluated on a variety of metrics, and the results demonstrate that it outperforms current benchmarks. By explaining its predictions, the proposed approach may help doctors detect and treat lung cancer patients more effectively.