

Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI.

Author Information

Durán Juan Manuel, Jongsma Karin Rolanda

Affiliations

Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands.

Julius Center, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands.

Publication Information

J Med Ethics. 2021 Mar 18. doi: 10.1136/medethics-2020-106820.


DOI: 10.1136/medethics-2020-106820
PMID: 33737318
Abstract

The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that computational processes are indeed methodologically opaque to humans, we argue that the reliability of algorithms provides reasons for trusting the outcomes of medical artificial intelligence (AI). To this end, we explain how computational reliabilism, which does not require transparency and supports the reliability of algorithms, justifies the belief that results of medical AI are to be trusted. We also argue that several ethical concerns remain with black box algorithms, even when the results are trustworthy. Having justified knowledge from reliable indicators is, therefore, necessary but not sufficient for normatively justifying physicians to act. This means that deliberation about the results of reliable algorithms is required to find out what is a desirable action. Thus understood, we argue that such challenges should not dismiss the use of black box algorithms altogether but should inform the way in which these algorithms are designed and implemented. When physicians are trained to acquire the necessary skills and expertise, and collaborate with medical informatics and data scientists, black box algorithms can contribute to improving medical care.


Similar Articles

[1]
Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI.

J Med Ethics. 2021-3-18

[2]
Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems.

J Med Ethics. 2022-7

[3]
The Medicine Revolution Through Artificial Intelligence: Ethical Challenges of Machine Learning Algorithms in Decision-Making.

Cureus. 2024-9-14

[4]
Ethics and governance of trustworthy medical artificial intelligence.

BMC Med Inform Decis Mak. 2023-1-13

[5]
"I don't think people are ready to trust these algorithms at face value": trust and the use of machine learning algorithms in the diagnosis of rare disease.

BMC Med Ethics. 2022-11-16

[6]
Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care.

Hastings Cent Rep. 2021-7

[7]
From understanding to justifying: Computational reliabilism for AI-based forensic evidence evaluation.

Forensic Sci Int Synerg. 2024-8-30

[8]
Transparency of Health Informatics Processes as the Condition of Healthcare Professionals' and Patients' Trust and Adoption: the Rise of Ethical Requirements.

Yearb Med Inform. 2020-8

[9]
The three ghosts of medical AI: Can the black-box present deliver?

Artif Intell Med. 2022-2

[10]
How the EU AI Act Seeks to Establish an Epistemic Environment of Trust.

Asian Bioeth Rev. 2024-6-24

Cited By

[1]
Trust in Medical AI: The Case of mHealth Diabetes Apps.

J Eval Clin Pract. 2025-8

[2]
Research Progress in Artificial Intelligence for Central Serous Chorioretinopathy: A Systematic Review.

Ophthalmol Ther. 2025-7-22

[3]
Exploring the determinants of AIGC usage intention based on the extended AIDUA model: a multi-group structural equation modeling analysis.

Front Psychol. 2025-5-21

[4]
Artificial intelligence for contextual well-being: Protocol for an exploratory sequential mixed methods study with medical students as a social microcosm.

PLoS One. 2025-5-28

[5]
Integrating Artificial Intelligence in Orthopedic Care: Advancements in Bone Care and Future Directions.

Bioengineering (Basel). 2025-5-13

[6]
Should Physicians Take the Rap? Normative Analysis of Clinician Perspectives on Responsible Use of 'Black Box' AI Tools.

AJOB Empir Bioeth. 2025-5-12

[7]
Toward transparency: Implications and future directions of artificial intelligence prediction model reporting in healthcare.

Surg Neurol Int. 2025-4-11

[8]
The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool.

AI Ethics. 2025-4

[9]
What Is the Role of Explainability in Medical Artificial Intelligence? A Case-Based Approach.

Bioengineering (Basel). 2025-4-2

[10]
The Impact of Medical Explainable Artificial Intelligence on Nurses' Innovation Behaviour: A Structural Equation Modelling Approach.

J Nurs Manag. 2024-9-26
