

A Deep Learning and Explainable Artificial Intelligence based Scheme for Breast Cancer Detection.

Author information

Saharan Sandeep, Wani Niyaz Ahmad, Chatterji Shreeya, Kumar Neeraj, Almuhaideb Abdullah Mohammed

Affiliations

Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Patiala, Punjab, 147004, India.

Department of Networks and Communications, College of Computer Science and Information Technology, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia.

Publication information

Sci Rep. 2025 Sep 1;15(1):32125. doi: 10.1038/s41598-024-80535-7.

Abstract

Incorporating Artificial Intelligence (AI) holds significant potential for transforming multiple aspects of the healthcare sector, including administration, medical prediction, decision-making, and diagnostics. However, the perceived opacity of AI remains a significant barrier to its widespread use in medicine. AI systems, and deep learning models in particular, have demonstrated exceptional performance, often comparable to that of humans, yet their internal mechanisms can resemble black boxes, making their decision-making processes difficult to understand and trust. This skepticism has limited their practical application in medical contexts. To address these challenges, a scheme named "DXAIB" is proposed. It uses a hybrid methodology that integrates Convolutional Neural Networks (CNNs) with a Random Forest (RF) model to detect breast cancer accurately. A distinguishing characteristic of "DXAIB" is its focus on the interpretability of predictions. The RF classifier predicts class labels from features learned automatically by the scheme's several convolutional layers. "DXAIB" does not stop at accurate predictions: to address the central concern of AI interpretability, SHAP is used to provide explanations at both the local and global levels for every prediction. Thus "DXAIB" not only supports diagnosis but also explains the rationale behind each prediction, increasing transparency and fostering confidence in the AI system's decision-making. The proposed "DXAIB" scheme surpasses other state-of-the-art schemes in both its prediction outcomes and its explainability findings.
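The abstract describes a two-stage pipeline: a deep network learns features, a Random Forest predicts the class labels, and an explanation method attributes each prediction to input features. Below is a minimal scikit-learn sketch of that structure, not the paper's implementation: the CNN feature extractor is stood in by the raw tabular features of scikit-learn's built-in breast-cancer dataset, and the SHAP global explanation is approximated by the forest's impurity-based feature importances. All parameter values are illustrative.

```python
# Sketch of a CNN-features -> Random Forest -> explanation pipeline.
# Stand-ins: raw tabular features replace CNN activations; impurity-based
# feature importances replace SHAP values for the global explanation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load the Wisconsin breast-cancer dataset (569 samples, 30 features)
data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

# Stage 2 of the scheme: a Random Forest predicts the class labels
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
acc = rf.score(X_te, y_te)

# Global explanation proxy: rank features by their contribution to the forest
top = sorted(zip(rf.feature_importances_, data.feature_names), reverse=True)[:3]

print(f"accuracy={acc:.3f}")
for imp, name in top:
    print(f"{name}: {imp:.3f}")
```

In the full scheme, `X_tr` would instead hold activations taken from the penultimate layer of the trained CNN, and a tool such as SHAP's tree explainer would replace the importance ranking to give per-prediction (local) as well as global attributions.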


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7383/12402191/23feacadebef/41598_2024_80535_Fig1_HTML.jpg
