
A historical perspective of biomedical explainable AI research.

Authors

Malinverno Luca, Barros Vesna, Ghisoni Francesco, Visonà Giovanni, Kern Roman, Nickel Philip J, Ventura Barbara Elvira, Šimić Ilija, Stryeck Sarah, Manni Francesca, Ferri Cesar, Jean-Quartier Claire, Genga Laura, Schweikert Gabriele, Lovrić Mario, Rosen-Zvi Michal

Affiliations

Porini SRL, Via Cavour 2, 22074 Lomazzo, Italy.

AI for Accelerated Healthcare & Life Sciences Discovery, IBM R&D Laboratories, University of Haifa Campus, Mount Carmel, Haifa 3498825, Israel.

Publication

Patterns (N Y). 2023 Sep 8;4(9):100830. doi: 10.1016/j.patter.2023.100830.

DOI: 10.1016/j.patter.2023.100830
PMID: 37720333
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10500028/
Abstract

The black-box nature of most artificial intelligence (AI) models encourages the development of explainability methods to engender trust into the AI decision-making process. Such methods can be broadly categorized into two main types: post hoc explanations and inherently interpretable algorithms. We aimed at analyzing the possible associations between COVID-19 and the push of explainable AI (XAI) to the forefront of biomedical research. We automatically extracted from the PubMed database biomedical XAI studies related to concepts of causality or explainability and manually labeled 1,603 papers with respect to XAI categories. To compare the trends pre- and post-COVID-19, we fit a change point detection model and evaluated significant changes in publication rates. We show that the advent of COVID-19 in the beginning of 2020 could be the driving factor behind an increased focus concerning XAI, playing a crucial role in accelerating an already evolving trend. Finally, we present a discussion with future societal use and impact of XAI technologies and potential future directions for those who pursue fostering clinical trust with interpretable machine learning models.
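The abstract describes fitting a change point detection model to publication rates to test whether the early-2020 advent of COVID-19 coincides with a break in the XAI publication trend. As a rough illustration of the idea (not the authors' actual model), the sketch below finds the single break point in a series of yearly publication counts that minimizes the squared error of a piecewise-constant fit; the counts are hypothetical.

```python
def best_change_point(counts):
    """Return the index k that splits `counts` into two segments whose
    piecewise-constant means give the lowest total squared error."""
    n = len(counts)
    best_k, best_cost = None, float("inf")
    for k in range(1, n):  # candidate split: [0, k) vs [k, n)
        left, right = counts[:k], counts[k:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        cost = (sum((x - m1) ** 2 for x in left)
                + sum((x - m2) ** 2 for x in right))
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Hypothetical yearly biomedical-XAI publication counts, 2016-2022
years = [2016, 2017, 2018, 2019, 2020, 2021, 2022]
counts = [40, 55, 70, 90, 300, 320, 340]
k = best_change_point(counts)
print(years[k])  # first year of the higher-rate regime -> 2020
```

Real analyses would use a dedicated method (e.g. penalized multiple change point search) and test the break for statistical significance, but the core idea is the same: locate the point where the publication rate shifts regime.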


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a345/10500028/2852886d46b4/fx1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a345/10500028/ad454c354246/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a345/10500028/2ed0d1b68bb1/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a345/10500028/e7c1fa908893/gr3.jpg

Similar articles

1
A historical perspective of biomedical explainable AI research.
Patterns (N Y). 2023 Sep 8;4(9):100830. doi: 10.1016/j.patter.2023.100830.
2
Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review.
J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
3
How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review.
JMIR AI. 2024 Oct 30;3:e53207. doi: 10.2196/53207.
4
Applications of Explainable Artificial Intelligence in Diagnosis and Surgery.
Diagnostics (Basel). 2022 Jan 19;12(2):237. doi: 10.3390/diagnostics12020237.
5
Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
6
Explainability and white box in drug discovery.
Chem Biol Drug Des. 2023 Jul;102(1):217-233. doi: 10.1111/cbdd.14262. Epub 2023 Apr 27.
7
Systematic literature review on the application of explainable artificial intelligence in palliative care studies.
Int J Med Inform. 2025 Aug;200:105914. doi: 10.1016/j.ijmedinf.2025.105914. Epub 2025 Apr 8.
8
Current methods in explainable artificial intelligence and future prospects for integrative physiology.
Pflugers Arch. 2025 Apr;477(4):513-529. doi: 10.1007/s00424-025-03067-7. Epub 2025 Feb 25.
9
Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
10
A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System.
Sensors (Basel). 2022 Oct 21;22(20):8068. doi: 10.3390/s22208068.

Cited by

1
Preoperative kidney tumor risk estimation with AI: From logistic regression to transformer.
PLoS One. 2025 May 30;20(5):e0323240. doi: 10.1371/journal.pone.0323240. eCollection 2025.
2
Peri-operative anti-inflammatory drug use and seizure recurrence after resective epilepsy surgery: Target trials emulation.
iScience. 2025 Feb 28;28(4):112124. doi: 10.1016/j.isci.2025.112124. eCollection 2025 Apr 18.
3
Survival machine learning model of T1 colorectal postoperative recurrence after endoscopic resection and surgical operation: a retrospective cohort study.
BMC Cancer. 2025 Feb 14;25(1):262. doi: 10.1186/s12885-025-13663-6.
4
Harnessing the AI/ML in Drug and Biological Products Discovery and Development: The Regulatory Perspective.
Pharmaceuticals (Basel). 2025 Jan 3;18(1):47. doi: 10.3390/ph18010047.
5
Contrasting rule and machine learning based digital self triage systems in the USA.
NPJ Digit Med. 2024 Dec 27;7(1):381. doi: 10.1038/s41746-024-01367-3.
6
Revolutionizing Molecular Design for Innovative Therapeutic Applications through Artificial Intelligence.
Molecules. 2024 Sep 29;29(19):4626. doi: 10.3390/molecules29194626.
7
Explainable AI-prioritized plasma and fecal metabolites in inflammatory bowel disease and their dietary associations.
iScience. 2024 Jun 17;27(7):110298. doi: 10.1016/j.isci.2024.110298. eCollection 2024 Jul 19.
8
A Study on the Robustness and Stability of Explainable Deep Learning in an Imbalanced Setting: The Exploration of the Conformational Space of G Protein-Coupled Receptors.
Int J Mol Sci. 2024 Jun 14;25(12):6572. doi: 10.3390/ijms25126572.
9
Artificial intelligence in liver cancer research: a scientometrics analysis of trends and topics.
Front Oncol. 2024 Feb 28;14:1355454. doi: 10.3389/fonc.2024.1355454. eCollection 2024.
10
From Pixels to Diagnosis: Algorithmic Analysis of Clinical Oral Photos for Early Detection of Oral Squamous Cell Carcinoma.
Cancers (Basel). 2024 Feb 29;16(5):1019. doi: 10.3390/cancers16051019.

References

1
Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models.
PLOS Digit Health. 2023 Feb 9;2(2):e0000198. doi: 10.1371/journal.pdig.0000198. eCollection 2023 Feb.
2
Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022).
Comput Methods Programs Biomed. 2022 Nov;226:107161. doi: 10.1016/j.cmpb.2022.107161. Epub 2022 Sep 27.
3
A critical overview of current progress for COVID-19: development of vaccines, antiviral drugs, and therapeutic antibodies.
J Biomed Sci. 2022 Sep 12;29(1):68. doi: 10.1186/s12929-022-00852-9.
4
AI-SCoRE (artificial intelligence-SARS CoV2 risk evaluation): a fast, objective and fully automated platform to predict the outcome in COVID-19 patients.
Radiol Med. 2022 Sep;127(9):960-972. doi: 10.1007/s11547-022-01518-0. Epub 2022 Aug 29.
5
Causal machine learning for healthcare and precision medicine.
R Soc Open Sci. 2022 Aug 3;9(8):220638. doi: 10.1098/rsos.220638. eCollection 2022 Aug.
6
ProtGPT2 is a deep unsupervised language model for protein design.
Nat Commun. 2022 Jul 27;13(1):4348. doi: 10.1038/s41467-022-32007-7.
7
Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.
8
Pneumonia Detection on Chest X-ray Using Radiomic Features and Contrastive Learning.
Proc IEEE Int Symp Biomed Imaging. 2021 Apr;2021:247-251. doi: 10.1109/isbi48211.2021.9433853. Epub 2021 May 25.
9
AI in small-molecule drug discovery: a coming wave?
Nat Rev Drug Discov. 2022 Mar;21(3):175-176. doi: 10.1038/d41573-022-00025-1.
10
AI in health and medicine.
Nat Med. 2022 Jan;28(1):31-38. doi: 10.1038/s41591-021-01614-0. Epub 2022 Jan 20.