
Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act.

Author information

Laux Johann

Affiliation

British Academy Postdoctoral Fellow, Oxford Internet Institute, University of Oxford, 1 St Giles', Oxford, OX1 3JS UK.

Publication information

AI Soc. 2024;39(6):2853-2866. doi: 10.1007/s00146-023-01777-z. Epub 2023 Oct 6.

DOI: 10.1007/s00146-023-01777-z
PMID: 39640298
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11614927/
Abstract

Human oversight has become a key mechanism for the governance of artificial intelligence ("AI"). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three contributions. First, it surveys the emerging laws of oversight, most importantly the European Union's Artificial Intelligence Act ("AIA"). It will be shown that while the AIA is concerned with the competence of human overseers, it does not provide much guidance on how to achieve effective oversight and leaves oversight obligations for AI developers underdefined. Second, this article presents a novel taxonomy of human oversight roles, differentiated along whether human intervention is constitutive to, or corrective of, a decision made or supported by an AI. The taxonomy makes it possible to propose suggestions for improving effectiveness tailored to the type of oversight in question. Third, drawing on scholarship within democratic theory, this article formulates six normative principles which institutionalise distrust in human oversight of AI. The institutionalisation of distrust has historically been practised in democratic governance. Applied for the first time to AI governance, the principles anticipate the fallibility of human overseers and seek to mitigate it at the level of institutional design. They aim to directly increase the trustworthiness of human oversight and to indirectly inspire well-placed trust in AI governance.

Similar articles

1
Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act.
AI Soc. 2024;39(6):2853-2866. doi: 10.1007/s00146-023-01777-z. Epub 2023 Oct 6.
2
How the EU AI Act Seeks to Establish an Epistemic Environment of Trust.
Asian Bioeth Rev. 2024 Jun 24;16(3):345-372. doi: 10.1007/s41649-024-00304-6. eCollection 2024 Jul.
3
Is human oversight to AI systems still possible?
N Biotechnol. 2025 Mar 25;85:59-62. doi: 10.1016/j.nbt.2024.12.003. Epub 2024 Dec 13.
4
Regulating algorithmic care in the European Union: evolving doctor-patient models through the Artificial Intelligence Act (AI-Act) and the liability directives.
Med Law Rev. 2025 Jan 4;33(1). doi: 10.1093/medlaw/fwae033.
5
Public governance of medical artificial intelligence research in the UK: an integrated multi-scale model.
Res Involv Engagem. 2022 May 21;8(1):21. doi: 10.1186/s40900-022-00357-7.
6
Trust, trustworthiness and AI governance.
Sci Rep. 2024 Sep 5;14(1):20752. doi: 10.1038/s41598-024-71761-0.
7
Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation.
Minds Mach (Dordr). 2022;32(2):241-268. doi: 10.1007/s11023-021-09577-4. Epub 2021 Nov 5.
8
Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk.
Regul Gov. 2024 Jan;18(1):3-32. doi: 10.1111/rego.12512. Epub 2023 Feb 6.
9
Trustworthy Artificial Intelligence in Dentistry: Learnings from the EU AI Act.
J Dent Res. 2024 Oct;103(11):1051-1056. doi: 10.1177/00220345241271160. Epub 2024 Sep 23.
10
Principles for enhancing trust in artificial intelligence systems among medical imaging professionals in Ghana: A nationwide cross-sectional study.
Radiography (Lond). 2025 May;31(3):102953. doi: 10.1016/j.radi.2025.102953. Epub 2025 Apr 13.

Cited by

1
Stakeholder Perspectives on Trustworthy AI for Parkinson Disease Management Using a Cocreation Approach: Qualitative Exploratory Study.
J Med Internet Res. 2025 Aug 6;27:e73710. doi: 10.2196/73710.
2
Michael is better than Mehmet: exploring the perils of algorithmic biases and selective adherence to advice from automated decision support systems in hiring.
Front Psychol. 2024 Sep 10;15:1416504. doi: 10.3389/fpsyg.2024.1416504. eCollection 2024.

References

1
Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk.
Regul Gov. 2024 Jan;18(1):3-32. doi: 10.1111/rego.12512. Epub 2023 Feb 6.
2
On the purpose of meaningful human control of AI.
Front Big Data. 2023 Jan 9;5:1017677. doi: 10.3389/fdata.2022.1017677. eCollection 2022.
3
An artificial intelligence life cycle: From conception to production.
Patterns (N Y). 2022 Apr 13;3(6):100489. doi: 10.1016/j.patter.2022.100489. eCollection 2022 Jun 10.
4
Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation.
Minds Mach (Dordr). 2022;32(2):241-268. doi: 10.1007/s11023-021-09577-4. Epub 2021 Nov 5.
5
Algorithmic Accountability and Public Reason.
Philos Technol. 2018;31(4):543-556. doi: 10.1007/s13347-017-0263-5. Epub 2017 May 24.
6
Algorithm aversion: people erroneously avoid algorithms after seeing them err.
J Exp Psychol Gen. 2015 Feb;144(1):114-26. doi: 10.1037/xge0000033. Epub 2014 Nov 17.
7
Complacency and bias in human use of automation: an attentional integration.
Hum Factors. 2010 Jun;52(3):381-410. doi: 10.1177/0018720810376055.
8
Groups of diverse problem solvers can outperform groups of high-ability problem solvers.
Proc Natl Acad Sci U S A. 2004 Nov 16;101(46):16385-9. doi: 10.1073/pnas.0403723101. Epub 2004 Nov 8.