Rathenau Instituut, Den Haag, The Netherlands.
Department of Philosophy, University of Twente, Enschede, The Netherlands.
J Eval Clin Pract. 2021 Jun;27(3):520-528. doi: 10.1111/jep.13541. Epub 2021 Feb 7.
Despite the great promise that artificial intelligence (AI) holds for health care, the uptake of such technologies into medical practice remains slow. In this paper, we focus on the epistemological issues arising from the development and implementation of a class of AI for clinical practice, namely clinical decision support systems (CDSS). We first provide an overview of the epistemic tasks of medical professionals, and then analyse which of these tasks can be supported by a CDSS, while also explaining why some of them should remain the territory of human experts. Clinical decision making involves a reasoning process in which clinicians combine different types of information into a coherent and adequate 'picture of the patient' that enables them to draw explainable and justifiable conclusions for which they bear epistemological responsibility. We therefore suggest that it is more appropriate to think of such systems as clinical reasoning support systems (CRSS). Developing a CRSS that supports clinicians' reasoning process requires that: (a) the system is developed on the basis of relevant and well-processed data; and (b) the system facilitates interaction with the clinician. This means that medical experts must collaborate closely with the AI experts developing the CRSS. In addition, responsible use of a CRSS requires that the data generated by the CRSS are justified through an empirical link with the individual patient. In practice, this means that the system indicates which factors contributed to its advice, allowing the user (the clinician) to evaluate whether these factors are medically plausible and applicable to the patient. Finally, we argue that proper implementation of a CRSS allows human and artificial intelligence to be combined into hybrid intelligence, where both perform clearly delineated and complementary empirical tasks. Whereas a CRSS can assist with statistical reasoning and finding patterns in complex data, it is the clinician's task to interpret, integrate and contextualize.