Sigfrids Anton, Leikas Jaana, Salo-Pöntinen Henrikki, Koskimies Emmi
VTT Technical Research Centre of Finland Ltd, Espoo, Finland.
Faculty of Information Technology, Cognitive Science, University of Jyväskylä, Jyväskylä, Finland.
Front Artif Intell. 2023 Feb 14;6:976887. doi: 10.3389/frai.2023.976887. eCollection 2023.
Human-centricity is considered a central aspect in the development and governance of artificial intelligence (AI). Various strategies and guidelines highlight the concept as a key goal. However, we argue that current uses of Human-Centered AI (HCAI) in policy documents and AI strategies risk downplaying promises of creating desirable, emancipatory technology that promotes human wellbeing and the common good. First, HCAI, as it appears in policy discourses, is the result of attempts to adapt the concept of human-centered design (HCD) to the public governance context of AI, but without proper reflection on how it should be reformed to suit the new task environment. Second, the concept is mainly used in reference to realizing human and fundamental rights, which are necessary, but not sufficient, for technological emancipation. Third, the concept is used ambiguously in policy and strategy discourses, making it unclear how it should be operationalized in governance practices. This article explores means and approaches for using the HCAI approach for technological emancipation in the context of public AI governance. We propose that the potential for emancipatory technology development rests on expanding the traditional user-centered view of technology design to involve community- and society-centered perspectives in public governance. Developing public AI governance in this way relies on enabling inclusive governance modalities that enhance the social sustainability of AI deployment. We discuss mutual trust, transparency, communication, and civic tech as key prerequisites for socially sustainable and human-centered public AI governance. Finally, the article introduces a systemic approach to ethically and socially sustainable, human-centered AI development and deployment.