
Explainable, trustworthy, and ethical machine learning for healthcare: A survey.

Affiliations

IHSAN Lab, Information Technology University of the Punjab (ITU), Lahore, Pakistan.

Research Center for Islamic Legislation and Ethics (CILE), College of Islamic Studies, Hamad Bin Khalifa University (HBKU), Doha, Qatar.

Publication Info

Comput Biol Med. 2022 Oct;149:106043. doi: 10.1016/j.compbiomed.2022.106043. Epub 2022 Sep 7.

DOI: 10.1016/j.compbiomed.2022.106043
PMID: 36115302
Abstract

With the advent of machine learning (ML) and deep learning (DL) empowered applications in critical domains like healthcare, questions about the liability, trust, and interpretability of their outputs are being raised. The black-box nature of various DL models is a roadblock to clinical utilization. Therefore, to gain the trust of clinicians and patients, we need to provide explanations for the decisions of models. With the promise of enhancing the trust and transparency of black-box models, researchers are maturing the field of eXplainable ML (XML). In this paper, we provide a comprehensive review of explainable and interpretable ML techniques for various healthcare applications. Along with highlighting the security, safety, and robustness challenges that hinder the trustworthiness of ML, we also discuss the ethical issues arising from the use of ML/DL for healthcare. We then describe how explainable and trustworthy ML can help resolve these ethical problems. Finally, we elaborate on the limitations of existing approaches and highlight various open research problems that require further development.
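The abstract argues that post-hoc explanations can reveal what a black-box model's decisions depend on. Permutation feature importance is one of the simpler techniques in this family: shuffle one feature column and measure how much predictive accuracy drops. Below is a minimal pure-Python sketch; the toy risk model, feature names, and weights are invented for illustration and are not taken from the survey.

```python
import random

# Toy "black-box" risk scorer over three hypothetical patient
# features (age, blood pressure, cholesterol), all scaled to [0, 1].
# Weights are illustrative only: the third feature barely matters.
WEIGHTS = [0.8, 0.6, 0.05]

def predict(row):
    score = sum(w * x for w, x in zip(WEIGHTS, row))
    return 1 if score > 1.0 else 0

def accuracy(rows, labels):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature, trials=20, seed=0):
    """Mean accuracy drop when one feature column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(rows, labels)
    drops = []
    for _ in range(trials):
        col = [r[feature] for r in rows]
        rng.shuffle(col)
        shuffled = [r[:feature] + [v] + r[feature + 1:]
                    for r, v in zip(rows, col)]
        drops.append(base - accuracy(shuffled, labels))
    return sum(drops) / trials

# Synthetic cohort; labels come from the model itself, so the
# baseline accuracy is 1.0 and any drop is due to the shuffle.
rng = random.Random(1)
rows = [[rng.random(), rng.random(), rng.random()] for _ in range(500)]
labels = [predict(r) for r in rows]

for i, name in enumerate(["age", "bp", "chol"]):
    print(name, round(permutation_importance(rows, labels, i), 3))
```

Shuffling a heavily weighted feature degrades accuracy sharply, while shuffling the near-irrelevant one barely moves it; ranking features this way gives clinicians a model-agnostic view of what drives predictions, which is the kind of transparency the survey discusses.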


Similar Articles

1. Explainable, trustworthy, and ethical machine learning for healthcare: A survey.
   Comput Biol Med. 2022 Oct;149:106043. doi: 10.1016/j.compbiomed.2022.106043. Epub 2022 Sep 7.
2. Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology.
   Can J Cardiol. 2022 Feb;38(2):204-213. doi: 10.1016/j.cjca.2021.09.004. Epub 2021 Sep 14.
3. The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review.
   Comput Biol Med. 2023 Nov;166:107555. doi: 10.1016/j.compbiomed.2023.107555. Epub 2023 Oct 4.
4. Unveiling the black box: A systematic review of Explainable Artificial Intelligence in medical image analysis.
   Comput Struct Biotechnol J. 2024 Aug 12;24:542-560. doi: 10.1016/j.csbj.2024.08.005. eCollection 2024 Dec.
5. Trusting AI made decisions in healthcare by making them explainable.
   Sci Prog. 2024 Jul-Sep;107(3):368504241266573. doi: 10.1177/00368504241266573.
6. Explainable deep learning in healthcare: A methodological survey from an attribution view.
   WIREs Mech Dis. 2022 May;14(3):e1548. doi: 10.1002/wsbm.1548. Epub 2022 Jan 17.
7. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI.
   IEEE Trans Neural Netw Learn Syst. 2021 Nov;32(11):4793-4813. doi: 10.1109/TNNLS.2020.3027314. Epub 2021 Oct 27.
8. Validation and interpretation of a multimodal drowsiness detection system using explainable machine learning.
   Comput Methods Programs Biomed. 2024 Jan;243:107925. doi: 10.1016/j.cmpb.2023.107925. Epub 2023 Nov 8.
9. Explainable AI for Bioinformatics: Methods, Tools and Applications.
   Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
10. Call for the responsible artificial intelligence in the healthcare.
    BMJ Health Care Inform. 2023 Dec 21;30(1):e100920. doi: 10.1136/bmjhci-2023-100920.

Cited By

1. Personalized health monitoring using explainable AI: bridging trust in predictive healthcare.
   Sci Rep. 2025 Aug 29;15(1):31892. doi: 10.1038/s41598-025-15867-z.
2. Explainable AI-Based Feature Selection Approaches for Raman Spectroscopy.
   Diagnostics (Basel). 2025 Aug 18;15(16):2063. doi: 10.3390/diagnostics15162063.
3. The status of machine learning in HIV testing in South Africa: a qualitative inquiry with stakeholders in Gauteng province.
   Front Digit Health. 2025 Aug 1;7:1618781. doi: 10.3389/fdgth.2025.1618781. eCollection 2025.
4. A privacy preserving machine learning framework for medical image analysis using quantized fully connected neural networks with TFHE based inference.
   Sci Rep. 2025 Jul 30;15(1):27880. doi: 10.1038/s41598-025-07622-1.
5. Applications of Artificial Intelligence and Machine Learning in Prediabetes: A Scoping Review.
   J Diabetes Sci Technol. 2025 Jul 8:19322968251351995. doi: 10.1177/19322968251351995.
6. Predicting mechanical ventilation duration in ICU patients: A data-driven machine learning approach for clinical decision-making.
   Digit Health. 2025 Jun 26;11:20552076251352988. doi: 10.1177/20552076251352988. eCollection 2025 Jan-Dec.
7. Exploring the social dimensions of AI integration in healthcare: a qualitative study of stakeholder views on challenges and opportunities.
   BMJ Open. 2025 Jun 27;15(6):e096208. doi: 10.1136/bmjopen-2024-096208.
8. Subordination by Design: Rethinking Power, Policy, and Autonomy in Perioperative Nursing.
   Nurs Inq. 2025 Jul;32(3):e70043. doi: 10.1111/nin.70043.
9. Comparative performance of twelve machine learning models in predicting COVID-19 mortality risk in children: a population-based retrospective cohort study in Brazil.
   PeerJ Comput Sci. 2025 May 28;11:e2916. doi: 10.7717/peerj-cs.2916. eCollection 2025.
10. Illuminating the black box: Machine learning enhances preoperative prediction in intrahepatic cholangiocarcinoma.
    World J Gastroenterol. 2025 May 7;31(17):106592. doi: 10.3748/wjg.v31.i17.106592.