Durán Juan Manuel, Jongsma Karin Rolanda
Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
Julius Center, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands.
J Med Ethics. 2021 Mar 18. doi: 10.1136/medethics-2020-106820.
The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust arise with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that computational processes are indeed methodologically opaque to humans, we argue that the reliability of algorithms provides reasons for trusting the outcomes of medical artificial intelligence (AI). To this end, we explain how computational reliabilism, which does not require transparency and supports the reliability of algorithms, justifies the belief that results of medical AI are to be trusted. We also argue that several ethical concerns remain with black box algorithms, even when the results are trustworthy. Having justified knowledge from reliable indicators is, therefore, necessary but not sufficient for normatively justifying physicians to act. This means that deliberation about the results of reliable algorithms is required to find out what a desirable action is. Thus understood, we argue that such challenges should not dismiss the use of black box algorithms altogether but should inform the way in which these algorithms are designed and implemented. When physicians are trained to acquire the necessary skills and expertise, and collaborate with medical informatics and data scientists, black box algorithms can contribute to improving medical care.