Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act.

Author Information

Johann Laux

Affiliation

British Academy Postdoctoral Fellow, Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS UK.

Publication Information

AI Soc. 2024;39(6):2853-2866. doi: 10.1007/s00146-023-01777-z. Epub 2023 Oct 6.

Abstract

Human oversight has become a key mechanism for the governance of artificial intelligence ("AI"). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This poses a challenge to the effectiveness of human oversight. In addressing this challenge, this article aims to make three contributions. First, it surveys the emerging laws of oversight, most importantly the European Union's Artificial Intelligence Act ("AIA"). It will be shown that while the AIA is concerned with the competence of human overseers, it does not provide much guidance on how to achieve effective oversight and leaves oversight obligations for AI developers underdefined. Second, this article presents a novel taxonomy of human oversight roles, differentiated according to whether human intervention is constitutive of, or corrective of, a decision made or supported by an AI. The taxonomy makes it possible to propose suggestions for improving effectiveness tailored to the type of oversight in question. Third, drawing on scholarship within democratic theory, this article formulates six normative principles which institutionalise distrust in human oversight of AI. The institutionalisation of distrust has historically been practised in democratic governance. Applied for the first time to AI governance, the principles anticipate the fallibility of human overseers and seek to mitigate it at the level of institutional design. They aim to directly increase the trustworthiness of human oversight and to indirectly inspire well-placed trust in AI governance.

Similar Articles

How the EU AI Act Seeks to Establish an Epistemic Environment of Trust.
Asian Bioeth Rev. 2024 Jun 24;16(3):345-372. doi: 10.1007/s41649-024-00304-6. eCollection 2024 Jul.

Is human oversight to AI systems still possible?
N Biotechnol. 2025 Mar 25;85:59-62. doi: 10.1016/j.nbt.2024.12.003. Epub 2024 Dec 13.

Trust, trustworthiness and AI governance.
Sci Rep. 2024 Sep 5;14(1):20752. doi: 10.1038/s41598-024-71761-0.
