Hannover Medical School.
Oslo Metropolitan University.
Am J Bioeth. 2024 Sep;24(9):67-78. doi: 10.1080/15265161.2024.2353800. Epub 2024 May 20.
Within the ethical debate on Machine Learning-driven clinical decision support systems (ML_CDSS), notions such as "human in the loop" or "meaningful human control" are often cited as necessary for ethical legitimacy. In addition, ethical guidance documents usually take ethical principles as their major point of reference, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion for how to interpret the role of the "human in the loop" and how to move beyond the perspective of rivaling ethical principles in the evaluation of AI in health care. We argue that patients should be perceived as "fellow workers" and epistemic partners in the interpretation of ML_CDSS outputs. We further highlight that a meaningful process of integrating (rather than weighing and balancing) ethical principles is most appropriate in the evaluation of medical AI.