Institute for Biomedical Ethics, University of Basel, Basel, Switzerland.
Institute for Research on Socio-Economic Inequality (IRSEI) in the Department of Social Sciences, University of Luxembourg, Esch-Sur-Alzette, Luxembourg.
Sci Eng Ethics. 2024 Jun 4;30(3):24. doi: 10.1007/s11948-024-00486-0.
While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and into how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.