Tilburg Institute for Law, Markets, Technology, and Society, Tilburg Law School, Tilburg, The Netherlands.
Bioethics Institute Ghent, Ghent University, Ghent, Belgium.
Bioethics. 2022 Feb;36(2):113-120. doi: 10.1111/bioe.12924. Epub 2021 Aug 10.
The use of artificial intelligence (AI) in healthcare comes with opportunities but also numerous challenges. A specific challenge that remains underexplored is the lack of clear and distinct definitions of the concepts used in and/or produced by these algorithms: how their real-world meaning is translated into machine language and, conversely, how their output is understood by the end user. This "semantic" black box compounds the "mathematical" black box present in many AI systems, in which the underlying "reasoning" process is often opaque. Thus, whereas it is often claimed that the use of AI in medical applications will deliver "objective" information, the true relevance or meaning of that information to the end user is frequently obscured. This is highly problematic, as AI devices are not only used by healthcare professionals for diagnosis and decision support, but can also be used to deliver information to patients, for example to create visual aids for use in shared decision-making. This paper examines the range and extent of this problem and its implications, on the basis of cases from the field of intensive care nephrology. We explore how the already problematic terminology used in human communication about detection, diagnosis, treatment, and prognosis in intensive care nephrology becomes a much more complicated affair when deployed in the form of algorithmic automation, with implications extending throughout clinical care and affecting norms and practices long considered fundamental to good clinical care.