Braun Matthias, Hummel Patrik, Beck Susanne, Dabrock Peter
Institute for Systematic Theology, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany.
J Med Ethics. 2020 Apr 3;47(12):e3. doi: 10.1136/medethics-2019-105860.
Making good decisions in extremely complex and difficult processes and situations has always been both a key task and a challenge in the clinic, and has given rise to a wide range of clinical, legal and ethical routines, protocols and reflections intended to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time and economic constraints, and not least further endeavours and achievements in medicine and healthcare continuously raise the need to evaluate and improve clinical decision-making. This article scrutinises whether and how clinical decision-making processes are challenged by the rise of so-called artificial intelligence-driven decision support systems (AI-DSS). In a first step, it analyses how the rise of AI-DSS will affect and transform the modes of interaction between different agents in the clinic. In a second step, we point out how these changing modes of interaction also imply shifts in the conditions of trustworthiness, epistemic challenges regarding transparency, the underlying normative concepts of agency and its embedding into concrete contexts of deployment and, finally, the consequences for (possible) ascriptions of responsibility. Third, we draw first conclusions for further steps towards 'meaningful human control' of clinical AI-DSS.