
Defending explicability as a principle for the ethics of artificial intelligence in medicine.

Author information

Centre for Medical Ethics, Institute of Health and Society, Faculty of Medicine, University of Oslo, Kirkeveien 166, Fredrik Holsts hus, Oslo, 0450, Norway.

Publication information

Med Health Care Philos. 2023 Dec;26(4):615-623. doi: 10.1007/s11019-023-10175-7. Epub 2023 Aug 29.

Abstract

The difficulty of explaining the outputs of artificial intelligence (AI) models, and what has led to them, is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new 'principle of explicability' alongside the traditional four principles of bioethics that make up the theory of 'principlism'. It responds specifically to a recent set of criticisms that challenge the supposed need for such a principle to perform an enabling role in relation to the traditional four principles, and that therefore suggest these four are sufficient without the addition of explicability. The paper challenges the critics' premise that explicability cannot be an ethical principle like the classic four because it is explicitly subordinate to them. It argues instead that principlism, in its original formulation, locates the justification for ethical principles in a midlevel position, such that they mediate between the most general moral norms and the contextual requirements of medicine. This conception of an ethical principle then provides a mold for an approach to explicability on which it functions as an enabling principle, unifying technical/epistemic demands on AI with the requirements of high-level ethical theories. The paper finishes by anticipating an objection that decision-making by clinicians and by AI falls equally, but implausibly, within the scope of the principle of explicability, which the paper rejects on the grounds that human decisions, unlike those of AI, can be explained by their social environments.

Similar articles

What is the outcome of applying principlism?
Theor Med Bioeth. 2011 Dec;32(6):375-88. doi: 10.1007/s11017-011-9185-x.

The principlism debate: a critical overview.
J Med Philos. 1995 Feb;20(1):85-105. doi: 10.1093/jmp/20.1.85.

Common Morality Principles in Biomedical Ethics: Responses to Critics.
Camb Q Healthc Ethics. 2022 Apr;31(2):164-176. doi: 10.1017/S0963180121000566. Epub 2021 Sep 13.

Common morality as an alternative to principlism.
Kennedy Inst Ethics J. 1995 Sep;5(3):219-36. doi: 10.1353/ken.0.0166.

Cited by

AI ethics for the everyday intensivist.
Crit Care Resusc. 2025 Jun 26;27(2):100115. doi: 10.1016/j.ccrj.2025.100115. eCollection 2025 Jun.

Artificial Intelligence in Perioperative Planning and Management of Liver Resection.
Indian J Surg Oncol. 2024 May;15(Suppl 2):186-195. doi: 10.1007/s13193-024-01883-4. Epub 2024 Jan 23.

References

Algorithmic Accountability and Public Reason.
Philos Technol. 2018;31(4):543-556. doi: 10.1007/s13347-017-0263-5. Epub 2017 May 24.

Principalism and moral dilemmas: a new principle.
J Med Ethics. 2005 Feb;31(2):101-5. doi: 10.1136/jme.2004.007856.
