Ryan Mark
Wageningen Economic Research, Wageningen University and Research, Droevendaalsesteeg 4, 6708 PB Wageningen, The Netherlands.
AI Soc. 2025;40(3):1303-1319. doi: 10.1007/s00146-024-01976-2. Epub 2024 Jun 4.
The use of a 'human-centred' artificial intelligence approach (HCAI) has substantially increased over the past few years in academic texts (1600+); in institutions (27 universities have HCAI labs, such as Stanford, Sydney, Berkeley, and Chicago); in tech companies (e.g., Microsoft, IBM, and Google); in politics (e.g., G7, G20, UN, EU, and EC); and in major institutional bodies (e.g., World Bank, World Economic Forum, UNESCO, and OECD). Intuitively, it sounds very appealing: placing human concerns at the centre of AI development and use. However, this paper will use insights from the works of Michel Foucault (mostly ) to argue that the HCAI approach is deeply problematic in its assumptions. In particular, this paper will criticise five main assumptions commonly found within HCAI: human-AI hybridisation is desirable and unproblematic; humans are not currently at the centre of the AI universe; we should use humans as a way to guide AI development; AI is the next step in a continuous path of human progress; and increasing human control over AI will reduce harmful bias. This paper will contribute to the philosophy of technology by using Foucault's analysis to examine assumptions found in HCAI [it provides a Foucauldian conceptual analysis of a current approach (human-centredness) that aims to influence the design and development of a transformative technology (AI)]; it will contribute to AI ethics debates by offering a critique of human-centredness in AI (by choosing Foucault, it bridges older ideas with contemporary issues); and it will contribute to Foucault studies by using his work to engage in contemporary debates, such as those concerning AI.