

Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review.

Author Information

Ghasemi Amirehsan, Hashtarkhani Soheil, Schwartz David L, Shaban-Nejad Arash

Affiliations

Department of Pediatrics, Center for Biomedical Informatics, College of Medicine, University of Tennessee Health Science Center, Memphis, Tennessee, USA.

The Bredesen Center for Interdisciplinary Research and Graduate Education, University of Tennessee, Knoxville, Tennessee, USA.

Publication Information

Cancer Innov. 2024 Jul 3;3(5):e136. doi: 10.1002/cai2.136. eCollection 2024 Oct.

Abstract

With advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, because many of these algorithms exhibit nonlinear and complex behavior, their decisions are difficult for clinicians to trust and are considered a black-box process. To remedy this, the scientific community has introduced explainable artificial intelligence (XAI). This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. We conducted a comprehensive search of Scopus, IEEE Xplore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned January 2017 to July 2023 and focused on peer-reviewed studies applying XAI methods to breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the most widely used model-agnostic XAI technique in breast cancer research, applied to explaining model predictions, diagnosing and classifying biomarkers, and analyzing prognosis and survival. SHAP was most often used to explain tree-based ensemble machine learning models. The most common reason is that SHAP is model-agnostic, which makes it both popular and useful for explaining any model's predictions; it is also relatively easy to implement and well suited to high-performing models such as tree-based ensembles. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.
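The pairing of SHAP with tree-based ensembles that the review highlights can be illustrated with a minimal Python sketch. This is an illustration only, not code from any of the reviewed studies: it assumes the open-source shap and scikit-learn packages, and scikit-learn's built-in Wisconsin breast cancer dataset stands in for the clinical datasets used in the reviewed papers.

```python
# Minimal illustrative sketch (not from the reviewed studies): a SHAP
# explanation of a tree-based ensemble on a breast cancer dataset.
# Assumes the open-source `shap` and `scikit-learn` packages.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Load the illustrative dataset and fit a tree-based ensemble.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree models, which is
# one reason SHAP pairs so naturally with tree-based ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # per-feature contributions (log-odds)

# Global summary: which features most influence the malignancy predictions.
shap.summary_plot(shap_values, X_test)
```

Because TreeExplainer exploits the tree structure for fast, exact attributions, this combination reflects the review's finding that SHAP is relatively easy to apply to high-performing tree-based models.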


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/72e8/11488119/cde7251b26e3/CAI2-3-e136-g001.jpg
