Ghnemat Rawan, Alodibat Sawsan, Abu Al-Haija Qasem
Department of Computer Science, Princess Sumaya University for Technology, Amman 11941, Jordan.
Department of Cybersecurity, Princess Sumaya University for Technology, Amman 11941, Jordan.
J Imaging. 2023 Aug 30;9(9):177. doi: 10.3390/jimaging9090177.
Recently, deep learning has gained significant attention as a noteworthy division of artificial intelligence (AI) due to its high accuracy and versatile applications. However, one of the major challenges of AI is its lack of interpretability, commonly referred to as the black-box problem. In this study, we introduce an explainable AI model for medical image classification that enhances the interpretability of the decision-making process. Our approach segments the images to provide a better understanding of how the AI model arrives at its results. We evaluated the model on five datasets, including the COVID-19 and Pneumonia Chest X-ray dataset, Chest X-ray (COVID-19 and Pneumonia), the COVID-19 Image Dataset (COVID-19, Viral Pneumonia, Normal), and the COVID-19 Radiography Database, achieving testing and validation accuracy of 90.6% on a relatively small dataset of 6432 images. The proposed model improves accuracy and reduces time complexity, making it more practical for medical diagnosis. Our approach thus offers a more interpretable and transparent AI model that can enhance the accuracy and efficiency of medical diagnosis.