

乳腺癌检测与风险预测中的可解释人工智能:一项系统性范围综述。

Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review.

作者信息

Ghasemi Amirehsan, Hashtarkhani Soheil, Schwartz David L, Shaban-Nejad Arash

机构信息

Department of Pediatrics, Center for Biomedical Informatics, College of Medicine University of Tennessee Health Science Center Memphis Tennessee USA.

The Bredesen Center for Interdisciplinary Research and Graduate Education University of Tennessee Knoxville Tennessee USA.

出版信息

Cancer Innov. 2024 Jul 3;3(5):e136. doi: 10.1002/cai2.136. eCollection 2024 Oct.

DOI:10.1002/cai2.136
PMID:39430216
原文链接:https://pmc.ncbi.nlm.nih.gov/articles/PMC11488119/
Abstract

With the advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, due to the nonlinear and complex behavior of many of these algorithms, decision-making by such algorithms is not trustworthy for clinicians and is considered a black-box process. Hence, the scientific community has introduced explainable artificial intelligence (XAI) to remedy the problem. This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. We conducted a comprehensive search on Scopus, IEEE Xplore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned from January 2017 to July 2023, focusing on peer-reviewed studies implementing XAI methods in breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the top model-agnostic XAI technique in breast cancer research in terms of usage, explaining the model prediction results, diagnosis and classification of biomarkers, and prognosis and survival analysis. Additionally, the SHAP model primarily explained tree-based ensemble machine learning models. The most common reason is that SHAP is model agnostic, which makes it both popular and useful for explaining any model prediction. Additionally, it is relatively easy to implement effectively and completely suits performant models, such as tree-based models. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.

摘要

随着人工智能(AI)的发展,数据驱动的算法在医学领域越来越受欢迎。然而,由于许多此类算法具有非线性和复杂的行为,临床医生认为此类算法的决策不可靠,并且将其视为一个黑箱过程。因此,科学界引入了可解释人工智能(XAI)来解决这一问题。本系统性范围综述探讨了XAI在乳腺癌检测和风险预测中的应用。我们使用系统搜索策略在Scopus、IEEE Xplore、PubMed和谷歌学术(前50条引用)上进行了全面搜索。搜索时间跨度为2017年1月至2023年7月,重点关注在乳腺癌数据集中实施XAI方法的同行评审研究。30项研究符合我们的纳入标准并被纳入分析。结果显示,就使用情况、解释模型预测结果、生物标志物的诊断和分类以及预后和生存分析而言,SHapley加性解释(SHAP)是乳腺癌研究中最常用的模型无关XAI技术。此外,SHAP模型主要解释基于树的集成机器学习模型。最常见的原因是SHAP与模型无关,这使得它在解释任何模型预测时既受欢迎又有用。此外,它相对容易有效实施,并且完全适用于高性能模型,如基于树的模型。可解释人工智能提高了人工智能支持的卫生系统和医疗设备的透明度、可解释性、公平性和可信度,并最终提高了医疗质量和治疗效果。
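摘要指出 SHAP 因模型无关而被广泛使用:它只需反复查询模型的预测函数,即可把一次预测分解为各特征的贡献。下面是一个补充性的最小示意(非原文内容;玩具线性模型、以基线值替换联盟外特征,均为本示例自设的假设),通过枚举全部特征联盟对单个样本精确计算 Shapley 值:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """对单个样本精确计算 Shapley 值(暴力枚举,O(2^n),仅适合特征很少的演示)。
    模型无关:只通过 predict() 查询模型;不在联盟 S 中的特征用基线值替换。"""
    n = len(x)

    def value(S):
        # 联盟 S 内的特征取真实值,其余取基线值,再询问模型
        z = [x[j] if j in S else baseline[j] for j in range(n)]
        return predict(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # 联盟大小 k = 0 .. n-1
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# 玩具线性"模型"(示例假设):对可加模型,特征 i 的 Shapley 值
# 恰好等于 w[i] * (x[i] - baseline[i])
w = [2.0, -1.0, 0.5]
predict = lambda z: sum(wi * zi for wi, zi in zip(w, z))
phi = shapley_values(predict, x=[1.0, 3.0, 2.0], baseline=[0.0, 0.0, 0.0])
# phi ≈ [2.0, -3.0, 1.0],且满足效率性:sum(phi) = f(x) - f(baseline)
```

这种枚举需要指数级次数的模型查询,这也正是实际研究中直接使用 shap 库的原因:它以采样(KernelExplainer)近似任意模型,或以树结构算法(TreeExplainer)对摘要中提到的树集成模型做高效精确计算。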

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/72e8/11488119/cde7251b26e3/CAI2-3-e136-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/72e8/11488119/7b3d8fd95a8f/CAI2-3-e136-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/72e8/11488119/820edcd5cd54/CAI2-3-e136-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/72e8/11488119/5ff978ed91c4/CAI2-3-e136-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/72e8/11488119/39cb36af93e5/CAI2-3-e136-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/72e8/11488119/09d57c76ac8e/CAI2-3-e136-g006.jpg

相似文献

1
Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review.乳腺癌检测与风险预测中的可解释人工智能:一项系统性范围综述。
Cancer Innov. 2024 Jul 3;3(5):e136. doi: 10.1002/cai2.136. eCollection 2024 Oct.
2
Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.用于印度新冠疫情数据严重程度预测和症状分析的模型无关可解释人工智能工具。
Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
3
Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.探索可解释性在可穿戴数据分析中的应用:系统文献综述。
J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
4
Applications of Explainable Artificial Intelligence in Diagnosis and Surgery.可解释人工智能在诊断与手术中的应用。
Diagnostics (Basel). 2022 Jan 19;12(2):237. doi: 10.3390/diagnostics12020237.
5
Explainable artificial intelligence in skin cancer recognition: A systematic review.皮肤癌识别中的可解释人工智能:一项系统综述。
Eur J Cancer. 2022 May;167:54-69. doi: 10.1016/j.ejca.2022.02.025. Epub 2022 Apr 5.
6
Local interpretable model-agnostic explanation approach for medical imaging analysis: A systematic literature review.用于医学影像分析的局部可解释模型无关解释方法:一项系统文献综述。
Comput Biol Med. 2025 Feb;185:109569. doi: 10.1016/j.compbiomed.2024.109569. Epub 2024 Dec 19.
7
Systematic literature review on the application of explainable artificial intelligence in palliative care studies.关于可解释人工智能在姑息治疗研究中应用的系统文献综述。
Int J Med Inform. 2025 Aug;200:105914. doi: 10.1016/j.ijmedinf.2025.105914. Epub 2025 Apr 8.
8
Explainability and white box in drug discovery.药物发现中的可解释性和白盒。
Chem Biol Drug Des. 2023 Jul;102(1):217-233. doi: 10.1111/cbdd.14262. Epub 2023 Apr 27.
9
The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review.可解释人工智能在医疗保健领域中的启示作用:系统文献综述。
Comput Biol Med. 2023 Nov;166:107555. doi: 10.1016/j.compbiomed.2023.107555. Epub 2023 Oct 4.
10
Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review.模型无关可解释人工智能框架在肿瘤学中的应用:一项叙述性综述。
Transl Cancer Res. 2022 Oct;11(10):3853-3868. doi: 10.21037/tcr-22-1626.

引用本文的文献

1
Evolution and integration of artificial intelligence across the cancer continuum in women: advances in risk assessment, prevention, and early detection.人工智能在女性癌症全程中的发展与整合:风险评估、预防及早期检测的进展。
Cancer Causes Control. 2025 Aug 20. doi: 10.1007/s10552-025-02048-6.
2
Deep Learning Applications in Clinical Cancer Detection: A Review of Implementation Challenges and Solutions.深度学习在临床癌症检测中的应用:实施挑战与解决方案综述。
Mayo Clin Proc Digit Health. 2025 Jul 18;3(3):100253. doi: 10.1016/j.mcpdig.2025.100253. eCollection 2025 Sep.
3
Histological Image Classification Between Follicular Lymphoma and Reactive Lymphoid Tissue Using Deep Learning and Explainable Artificial Intelligence (XAI).使用深度学习和可解释人工智能(XAI)对滤泡性淋巴瘤和反应性淋巴组织进行组织学图像分类。
Cancers (Basel). 2025 Jul 22;17(15):2428. doi: 10.3390/cancers17152428.
4
Detection of breast cancer using machine learning and explainable artificial intelligence.利用机器学习和可解释人工智能检测乳腺癌。
Sci Rep. 2025 Jul 24;15(1):26931. doi: 10.1038/s41598-025-12644-w.
5
ALOXE3 expression predicts poor prognosis and modulates immune infiltration in colon adenocarcinoma.ALOXE3表达预示着结肠腺癌的预后不良并调节免疫浸润。
World J Surg Oncol. 2025 Jul 23;23(1):296. doi: 10.1186/s12957-025-03939-3.
6
Gradual poisoning of a chest x-ray convolutional neural network with an adversarial attack and AI explainability methods.通过对抗攻击和人工智能可解释性方法对胸部X光卷积神经网络进行渐进式中毒攻击。
Sci Rep. 2025 Jul 1;15(1):21779. doi: 10.1038/s41598-025-02294-3.
7
Breast Cancer Detection via Multi-Tiered Self-Contrastive Learning in Microwave Radiometric Imaging.通过微波辐射成像中的多层自对比学习进行乳腺癌检测。
Diagnostics (Basel). 2025 Feb 25;15(5):549. doi: 10.3390/diagnostics15050549.
8
Editorial: Artificial intelligence applications for cancer diagnosis in radiology.社论:人工智能在放射学癌症诊断中的应用。
Front Radiol. 2025 Jan 29;5:1493783. doi: 10.3389/fradi.2025.1493783. eCollection 2025.
9
Machine learning for predicting neoadjuvant chemotherapy effectiveness using ultrasound radiomics features and routine clinical data of patients with breast cancer.利用超声影像组学特征和乳腺癌患者的常规临床数据进行机器学习以预测新辅助化疗疗效。
Front Oncol. 2025 Jan 14;14:1485681. doi: 10.3389/fonc.2024.1485681. eCollection 2024.
10
Predictive analytics in bronchopulmonary dysplasia: past, present, and future.支气管肺发育不良的预测分析:过去、现在与未来。
Front Pediatr. 2024 Nov 20;12:1483940. doi: 10.3389/fped.2024.1483940. eCollection 2024.

本文引用的文献

1
Building trust in deep learning-based immune response predictors with interpretable explanations.基于可解释性解释构建深度学习免疫反应预测器的信任。
Commun Biol. 2024 Mar 6;7(1):279. doi: 10.1038/s42003-024-05968-2.
2
Machine learning and XAI approaches highlight the strong connection between and pollutants and Alzheimer's disease.机器学习和 XAI 方法强调了 和 污染物与阿尔茨海默病之间的紧密联系。
Sci Rep. 2024 Mar 5;14(1):5385. doi: 10.1038/s41598-024-55439-1.
3
Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma.皮肤科医生般的可解释人工智能增强了对黑色素瘤诊断的信任和信心。
Nat Commun. 2024 Jan 15;15(1):524. doi: 10.1038/s41467-023-43095-4.
4
Perceptions of Data Set Experts on Important Characteristics of Health Data Sets Ready for Machine Learning: A Qualitative Study.数据专家对适合机器学习的健康数据集的重要特征的看法:一项定性研究。
JAMA Netw Open. 2023 Dec 1;6(12):e2345892. doi: 10.1001/jamanetworkopen.2023.45892.
5
BI-RADS-NET-V2: A Composite Multi-Task Neural Network for Computer-Aided Diagnosis of Breast Cancer in Ultrasound Images With Semantic and Quantitative Explanations.BI-RADS-NET-V2:一种用于超声图像中乳腺癌计算机辅助诊断的复合多任务神经网络,具有语义和定量解释。
IEEE Access. 2023;11:79480-79494. doi: 10.1109/access.2023.3298569. Epub 2023 Jul 25.
6
Deep learning classification of deep ultraviolet fluorescence images toward intra-operative margin assessment in breast cancer.基于深度学习的深紫外荧光图像分类用于乳腺癌术中切缘评估。
Front Oncol. 2023 Jun 16;13:1179025. doi: 10.3389/fonc.2023.1179025. eCollection 2023.
7
An Explainable AI Approach for Breast Cancer Metastasis Prediction Based on Clinicopathological Data.一种基于临床病理数据的乳腺癌转移预测的可解释人工智能方法。
IEEE Trans Biomed Eng. 2023 Dec;70(12):3321-3329. doi: 10.1109/TBME.2023.3282840. Epub 2023 Nov 21.
8
A Hybrid Algorithm of ML and XAI to Prevent Breast Cancer: A Strategy to Support Decision Making.一种用于预防乳腺癌的机器学习与可解释人工智能混合算法:一种支持决策的策略。
Cancers (Basel). 2023 Apr 25;15(9):2443. doi: 10.3390/cancers15092443.
9
The prediction of distant metastasis risk for male breast cancer patients based on an interpretable machine learning model.基于可解释机器学习模型的男性乳腺癌患者远处转移风险预测。
BMC Med Inform Decis Mak. 2023 Apr 21;23(1):74. doi: 10.1186/s12911-023-02166-8.
10
DeepMiCa: Automatic segmentation and classification of breast MIcroCAlcifications from mammograms.DeepMiCa:从乳房X光片中自动分割和分类乳腺微钙化。
Comput Methods Programs Biomed. 2023 Jun;235:107483. doi: 10.1016/j.cmpb.2023.107483. Epub 2023 Mar 31.