Steen Marc, Timan Tjerk, Van Diggelen Jurriaan, Vethman Steven
TNO, The Hague, Netherlands.
AI Soc. 2025;40(5):3615-3626. doi: 10.1007/s00146-024-02101-z. Epub 2024 Oct 29.
In this article, we critique the ways in which the people involved in the development and application of AI systems often visualize and talk about those systems. Often, they picture such systems as shiny humanoid robots or as free-floating electronic brains. Such images convey misleading messages: as if AI works independently of people and can reason in ways superior to people. Instead, we propose to visualize AI systems as parts of larger, sociotechnical systems. Here, we can learn, for example, from cybernetics. Similarly, we propose that the people involved in the design and deployment of an algorithm extend their conversations beyond the four boxes of the Error Matrix, for example, to critically discuss the system's errors and their consequences. We present two thought experiments, each with a practical example. We propose to understand, visualize, and talk about AI systems in relation to a larger, complex reality. We also propose to enable people from diverse disciplines to collaborate around shared objects, for example: a drawing of an AI system in its sociotechnical context, or an 'extended' Error Matrix. Such interventions can promote meaningful human control, transparency, and fairness in the design and deployment of AI systems.
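The "four boxes" referred to here are those of the standard Error Matrix (confusion matrix): true positives, false positives, false negatives, and true negatives. As a minimal sketch of what those four boxes count (an illustration, not the authors' own implementation; the article's 'extended' Error Matrix would add further context beyond these four numbers):

```python
from collections import Counter

def error_matrix(y_true, y_pred):
    """Tally the four boxes of the Error (confusion) Matrix.

    Returns a dict with counts of true positives (TP), false positives
    (FP), false negatives (FN), and true negatives (TN). Inputs are
    sequences of binary labels (1 = positive, 0 = negative).
    """
    boxes = Counter()
    for truth, pred in zip(y_true, y_pred):
        if truth and pred:
            boxes["TP"] += 1      # predicted positive, actually positive
        elif not truth and pred:
            boxes["FP"] += 1      # predicted positive, actually negative
        elif truth and not pred:
            boxes["FN"] += 1      # predicted negative, actually positive
        else:
            boxes["TN"] += 1      # predicted negative, actually negative
    return dict(boxes)

# Hypothetical example: a binary classifier's outputs vs. ground truth.
truth = [1, 1, 0, 0, 1, 0]
preds = [1, 0, 0, 1, 1, 0]
print(error_matrix(truth, preds))
```

The article's point is precisely that conversations among designers and stakeholders should not stop at these four counts, but extend to the sociotechnical context in which the errors occur.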