Sune Holm
Department of Food and Resource Economics, University of Copenhagen, 1958 Frederiksberg C, Denmark.
Camb Q Healthc Ethics. 2023 Jun 9:1-7. doi: 10.1017/S0963180123000294.
When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? This question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. According to the Validation View, it is sufficient that the AI system has been validated using established standards for safety and reliability. I defend the Explanation View against two lines of criticism, and I argue that, within the framework of evidence-based medicine, mere validation seems insufficient to warrant the use of AI output. I end by characterizing the epistemic responsibility of clinicians and by pointing out that an AI output cannot by itself ground a practical conclusion about what to do.