


Accountable Artificial Intelligence: Holding Algorithms to Account.

Author Information

Busuioc Madalina

Affiliation

Leiden University.

Publication Information

Public Adm Rev. 2021 Sep-Oct;81(5):825-836. doi: 10.1111/puar.13293. Epub 2020 Nov 11.

DOI: 10.1111/puar.13293
PMID: 34690372
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8518786/
Abstract

Artificial intelligence (AI) algorithms govern in subtle yet fundamental ways the way we live and are transforming our societies. The promise of efficient, low-cost, or "neutral" solutions harnessing the potential of big data has led public bodies to adopt algorithmic systems in the provision of public services. As AI algorithms have permeated high-stakes aspects of our public existence, from hiring and education decisions to the governmental use of enforcement powers (policing) or liberty-restricting decisions (bail and sentencing), this necessarily raises important accountability questions: What accountability challenges do AI algorithmic systems bring with them, and how can we safeguard accountability in algorithmic decision-making? Drawing on a decidedly public administration perspective, and given the current challenges that have thus far become manifest in the field, we critically reflect on and map out in a conceptually guided manner the implications of these systems, and the limitations they pose, for public accountability.


Similar Articles

1. Accountable Artificial Intelligence: Holding Algorithms to Account.
   Public Adm Rev. 2021 Sep-Oct;81(5):825-836. doi: 10.1111/puar.13293. Epub 2020 Nov 11.
2. Public procurement of artificial intelligence systems: new risks and future proofing.
   AI Soc. 2022 Oct 2:1-15. doi: 10.1007/s00146-022-01572-2.
3. AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings.
   Telecomm Policy. 2020 Jul;44(6):101976. doi: 10.1016/j.telpol.2020.101976. Epub 2020 Apr 17.
4. Ethical machines: The human-centric use of artificial intelligence.
   iScience. 2021 Mar 3;24(3):102249. doi: 10.1016/j.isci.2021.102249. eCollection 2021 Mar 19.
5. Algorithmic Accountability and Public Reason.
   Philos Technol. 2018;31(4):543-556. doi: 10.1007/s13347-017-0263-5. Epub 2017 May 24.
6. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis.
   Soc Sci Med. 2023 Dec;338:116357. doi: 10.1016/j.socscimed.2023.116357. Epub 2023 Nov 4.
7. New and emerging technology for adult social care - the example of home sensors with artificial intelligence (AI) technology.
   Health Soc Care Deliv Res. 2023 Jun;11(9):1-64. doi: 10.3310/HRYW4281.
8. Propagation of societal gender inequality by internet search algorithms.
   Proc Natl Acad Sci U S A. 2022 Jul 19;119(29):e2204529119. doi: 10.1073/pnas.2204529119. Epub 2022 Jul 12.
9. Algorithmic discrimination: examining its types and regulatory measures with emphasis on US legal practices.
   Front Artif Intell. 2024 May 21;7:1320277. doi: 10.3389/frai.2024.1320277. eCollection 2024.
10. Ethical, Legal, and Financial Considerations of Artificial Intelligence in Surgery.
   Am Surg. 2023 Jan;89(1):55-60. doi: 10.1177/00031348221117042. Epub 2022 Aug 17.

Cited By

1. Protein Sequence Analysis landscape: A Systematic Review of Task Types, Databases, Datasets, Word Embeddings Methods, and Language Models.
   Database (Oxford). 2025 May 30;2025. doi: 10.1093/database/baaf027.
2. Establishing and evaluating trustworthy AI: overview and research challenges.
   Front Big Data. 2024 Nov 29;7:1467222. doi: 10.3389/fdata.2024.1467222. eCollection 2024.
3. Governance of artificial intelligence and machine learning in pharmacovigilance: what works today and what more is needed?
   Ther Adv Drug Saf. 2024 Oct 31;15:20420986241293303. doi: 10.1177/20420986241293303. eCollection 2024.
4. When the digits don't add up: Research strategies for post-digital peacebuilding.
   Coop Confl. 2024 Sep;59(3):425-446. doi: 10.1177/00108367231184727. Epub 2023 Aug 12.
5. Trust, trustworthiness and AI governance.
   Sci Rep. 2024 Sep 5;14(1):20752. doi: 10.1038/s41598-024-71761-0.
6. Do large language models have a legal duty to tell the truth?
   R Soc Open Sci. 2024 Aug 7;11(8):240197. doi: 10.1098/rsos.240197. eCollection 2024 Aug.
7. Conceptualizing Automated Decision-Making in Organizational Contexts.
   Philos Technol. 2024;37(3):92. doi: 10.1007/s13347-024-00773-5. Epub 2024 Jul 16.
8. Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of "Responsibility" in Artificial Intelligence within the Healthcare Context.
   Asian Bioeth Rev. 2024 Jun 11;16(3):315-344. doi: 10.1007/s41649-024-00292-7. eCollection 2024 Jul.
9. Mobile Diagnostic Clinics.
   ACS Sens. 2024 Jun 28;9(6):2777-2792. doi: 10.1021/acssensors.4c00636. Epub 2024 May 22.
10. Psychotherapy, artificial intelligence and adolescents: ethical aspects.
   J Prev Med Hyg. 2024 Jan 1;64(4):E438-E442. doi: 10.15167/2421-4248/jpmh2023.64.4.3135. eCollection 2023 Dec.

References

1. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
   Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.
2. The accuracy, fairness, and limits of predicting recidivism.
   Sci Adv. 2018 Jan 17;4(1):eaao5580. doi: 10.1126/sciadv.aao5580. eCollection 2018 Jan.
3. Automation bias and verification complexity: a systematic review.
   J Am Med Inform Assoc. 2017 Mar 1;24(2):423-431. doi: 10.1093/jamia/ocw105.
4. The social dilemma of autonomous vehicles.
   Science. 2016 Jun 24;352(6293):1573-6. doi: 10.1126/science.aaf2654.