
Human-centered explainability for life sciences, healthcare, and medical informatics.

Authors

Dey Sanjoy, Chakraborty Prithwish, Kwon Bum Chul, Dhurandhar Amit, Ghalwash Mohamed, Suarez Saiz Fernando J, Ng Kenney, Sow Daby, Varshney Kush R, Meyer Pablo

Affiliations

Center for Computational Health, IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598, USA.

IBM Research AI, IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598, USA.

Publication

Patterns (N Y). 2022 May 13;3(5):100493. doi: 10.1016/j.patter.2022.100493.

DOI: 10.1016/j.patter.2022.100493
PMID: 35607616
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9122967/
Abstract

Rapid advances in artificial intelligence (AI) and availability of biological, medical, and healthcare data have enabled the development of a wide variety of models. Significant success has been achieved in a wide range of fields, such as genomics, protein folding, disease diagnosis, imaging, and clinical tasks. Although widely used, the inherent opacity of deep AI models has brought criticism from the research field and little adoption in clinical practice. Concurrently, there has been a significant amount of research focused on making such methods more interpretable, reviewed here, but inherent critiques of such explainability in AI (XAI), its requirements, and concerns with fairness/robustness have hampered their real-world adoption. We here discuss how user-driven XAI can be made more useful for different healthcare stakeholders through the definition of three key personas-data scientists, clinical researchers, and clinicians-and present an overview of how different XAI approaches can address their needs. For illustration, we also walk through several research and clinical examples that take advantage of XAI open-source tools, including those that help enhance the explanation of the results through visualization. This perspective thus aims to provide a guidance tool for developing explainability solutions for healthcare by empowering both subject matter experts, providing them with a survey of available tools, and explainability developers, by providing examples of how such methods can influence in practice adoption of solutions.
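The post-hoc, model-agnostic explanation methods the paper surveys can be illustrated with a minimal sketch. The following is not from the paper: it hand-rolls a toy permutation-importance explainer over a hypothetical risk-score function (the `model`, the feature names `age`/`bmi`/`noise`, and the data are all invented for illustration). The idea it demonstrates is the general one: shuffle one input feature at a time and measure how much the black-box model's predictions change.

```python
import random

# Toy "clinical risk model" (hypothetical): depends strongly on feature 0,
# weakly on feature 1, and ignores feature 2 entirely.
def model(row):
    age, bmi, noise = row
    return 0.8 * age + 0.2 * bmi

def permutation_importance(model, rows, n_repeats=10, seed=0):
    """Post-hoc, model-agnostic attribution: shuffle one feature at a time
    and report the mean absolute change in the model's predictions."""
    rng = random.Random(seed)
    baseline = [model(r) for r in rows]
    importances = []
    for j in range(len(rows[0])):
        total = 0.0
        for _ in range(n_repeats):
            col = [r[j] for r in rows]
            rng.shuffle(col)
            perturbed = [list(r) for r in rows]
            for i, v in enumerate(col):
                perturbed[i][j] = v
            total += sum(abs(model(p) - b)
                         for p, b in zip(perturbed, baseline)) / len(rows)
        importances.append(total / n_repeats)
    return importances

rng = random.Random(42)
data = [(rng.random(), rng.random(), rng.random()) for _ in range(200)]
imp = permutation_importance(model, data)
for name, score in zip(["age", "bmi", "noise"], imp):
    # "age" should dominate; "noise" should score exactly 0.0.
    print(f"{name}: {score:.3f}")
```

Because the explainer only queries the model's predict function, the same sketch applies unchanged to an opaque deep model; this query-only property is what makes such methods attractive to the clinical-researcher and clinician personas the paper defines, who cannot inspect model internals.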


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e7f3/9122967/e1edbed8a303/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e7f3/9122967/f8217eb2f6d2/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e7f3/9122967/1173608d9780/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e7f3/9122967/75c0d4d3e17b/gr4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e7f3/9122967/36bce631a2e5/gr5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e7f3/9122967/9b5d6a9d4c7a/gr6.jpg
