Department of Computer Science, HITEC University, Taxila, 47080, Pakistan.
Department of Artificial Intelligence, College of Computer Engineering and Science, Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia.
Comput Biol Med. 2024 Nov;182:109183. doi: 10.1016/j.compbiomed.2024.109183. Epub 2024 Oct 2.
Explainable artificial intelligence (XAI) aims to offer machine learning (ML) methods that people can understand and appropriately trust, and to create more explainable models. In medical imaging, XAI has been adopted to interpret deep learning black-box models and to demonstrate the trustworthiness of machine decisions and predictions. In this work, we propose a deep learning and explainable AI-based framework for segmenting and classifying brain tumors. The proposed framework consists of two parts. The first part is an encoder-decoder DeepLabv3+ architecture whose hyperparameters are initialized with Bayesian Optimization (BO). Multi-scale features are extracted through the Atrous Spatial Pyramid Pooling (ASPP) module and passed to the output layer for tumor segmentation. In the second part of the proposed framework, two customized models are proposed, named Inverted Residual Bottleneck 96 layers (IRB-96) and Inverted Residual Bottleneck Self-Attention (IRB-Self). Both models are trained on the selected brain tumor datasets, and features are extracted from the global average pooling and self-attention layers. The features are fused using a serial approach, and classification is performed. BO is also applied to the hyperparameters of the neural network classifiers to improve the classification results. The XAI method LIME is implemented to examine the interpretability of the proposed models. Experiments were performed on the Figshare dataset, yielding an average segmentation accuracy of 92.68 % and a classification accuracy of 95.42 %. Compared with state-of-the-art techniques, the proposed framework shows improved accuracy.
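As a concrete illustration of the ASPP stage, the following is a minimal PyTorch sketch of a standard DeepLabv3+-style ASPP block. The channel counts and dilation rates (6, 12, 18) are common defaults assumed here, not values reported in the paper.

    # Minimal ASPP sketch; channel counts and dilation rates are
    # assumed DeepLabv3+ defaults, not the paper's configuration.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ASPP(nn.Module):
        def __init__(self, in_ch=2048, out_ch=256, rates=(6, 12, 18)):
            super().__init__()
            # 1x1 branch plus one atrous 3x3 branch per dilation rate
            self.branches = nn.ModuleList(
                [nn.Conv2d(in_ch, out_ch, 1, bias=False)] +
                [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
                 for r in rates])
            # Image-level pooling branch captures global context
            self.pool = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(in_ch, out_ch, 1, bias=False))
            # Project the concatenated branches back to out_ch channels
            self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1)

        def forward(self, x):
            h, w = x.shape[2:]
            feats = [b(x) for b in self.branches]
            feats.append(F.interpolate(self.pool(x), size=(h, w),
                                       mode="bilinear", align_corners=False))
            return self.project(torch.cat(feats, dim=1))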
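The abstract does not specify the internals of IRB-96 or IRB-Self. The sketch below shows a MobileNetV2-style inverted residual bottleneck, the building block both names refer to, together with the serial (concatenation-based) fusion of a global-average-pooling feature vector and a self-attention feature vector. All layer sizes, the expansion factor, and the feature dimensions are illustrative assumptions.

    # Inverted residual bottleneck and serial feature fusion; sizes
    # are hypothetical, not the paper's IRB-96 / IRB-Self layouts.
    import torch
    import torch.nn as nn

    class InvertedResidual(nn.Module):
        def __init__(self, ch, expand=6):
            super().__init__()
            hidden = ch * expand
            self.block = nn.Sequential(
                nn.Conv2d(ch, hidden, 1, bias=False),      # expand
                nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, hidden, 3, padding=1,
                          groups=hidden, bias=False),      # depthwise
                nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
                nn.Conv2d(hidden, ch, 1, bias=False),      # project
                nn.BatchNorm2d(ch))

        def forward(self, x):
            return x + self.block(x)   # residual over the bottleneck

    # Serial fusion: concatenate the two models' feature vectors
    # before the classification stage.
    f_gap = torch.randn(8, 1280)   # hypothetical IRB-96 GAP features
    f_att = torch.randn(8, 512)    # hypothetical IRB-Self attention features
    fused = torch.cat([f_gap, f_att], dim=1)   # shape (8, 1792)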
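BO-based hyperparameter search of the kind described can be sketched with scikit-optimize's gp_minimize. The search space and the toy objective below are placeholders standing in for the paper's actual training-and-validation loop.

    # Hedged BO sketch using scikit-optimize; the space and objective
    # are illustrative, not the paper's configuration.
    from skopt import gp_minimize
    from skopt.space import Real, Integer

    space = [Real(1e-4, 1e-1, prior="log-uniform", name="lr"),
             Integer(16, 128, name="batch_size")]

    def objective(params):
        lr, batch_size = params
        # Placeholder for a real train/validate loop that would return
        # the validation error of the classifier under these settings.
        return (lr - 0.01) ** 2 + abs(batch_size - 64) / 1000.0

    result = gp_minimize(objective, space, n_calls=30, random_state=0)
    print("best hyperparameters:", result.x)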
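For the interpretability step, LIME's image explainer perturbs superpixels and fits a local surrogate model around one prediction. The sketch below uses the lime_image API with a toy two-class predict_fn standing in for the trained classifier; the 224x224 input size is also an assumption.

    # LIME image-explanation sketch; predict_fn is a toy stand-in for
    # the trained model's batched forward pass.
    import numpy as np
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    def predict_fn(images):
        # Toy classifier: the "tumor" probability rises with the mean
        # image intensity. Replace with the real model.
        p = np.clip(images.mean(axis=(1, 2, 3)), 0.0, 1.0)
        return np.stack([1.0 - p, p], axis=1)

    image = np.random.rand(224, 224, 3)        # placeholder MRI slice
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        image, predict_fn, top_labels=1, hide_color=0, num_samples=1000)
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True,
        num_features=5, hide_rest=False)
    overlay = mark_boundaries(img, mask)       # highlights influential regions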