

A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government.

Authors

Sætra Henrik Skaug

Affiliation

Østfold University College, Remmen, 1757, Halden, Norway.

Publication

Technol Soc. 2020 Aug;62:101283. doi: 10.1016/j.techsoc.2020.101283. Epub 2020 Jun 8.

DOI: 10.1016/j.techsoc.2020.101283
PMID: 32536737
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7278651/
Abstract

Artificial intelligence (AI) has proven to be superior to human decision-making in certain areas. This is particularly the case whenever there is a need for advanced strategic reasoning and analysis of vast amounts of data in order to solve complex problems. Few human activities fit this description better than politics. In politics we deal with some of the most complex issues humans face, short-term and long-term consequences have to be balanced, and we make decisions knowing that we do not fully understand their consequences. I examine an extreme case of the application of AI in the domain of government, and use this case to examine a subset of the potential harms associated with algorithmic governance. I focus on five objections based on political theoretical considerations and the potential harms of an AI technocracy. These are objections based on the ideas of 'political man' and participation as a prerequisite for legitimacy, the non-morality of machines and the value of transparency and accountability. I conclude that these objections do not successfully derail AI technocracy, if we make sure that mechanisms for control and backup are in place, and if we design a system in which humans have control over the direction and fundamental goals of society. Such a technocracy, if the AI capabilities of policy formation here assumed becomes reality, may, in theory, provide us with better means of participation, legitimacy, and more efficient government.


Similar Articles

1
A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government.
Technol Soc. 2020 Aug;62:101283. doi: 10.1016/j.techsoc.2020.101283. Epub 2020 Jun 8.
2
Politics or Technocracy - What Next for Global Health? Comment on "Navigating Between Stealth Advocacy and Unconscious Dogmatism: The Challenge of Researching the Norms, Politics and Power of Global Health".
Int J Health Policy Manag. 2015 Dec 12;5(3):201-4. doi: 10.15171/ijhpm.2015.209.
3
Politics by Automatic Means? A Critique of Artificial Intelligence Ethics at Work.
Front Artif Intell. 2022 Jul 15;5:869114. doi: 10.3389/frai.2022.869114. eCollection 2022.
4
Policy evaluation and democracy: Do they fit?
Eval Program Plann. 2018 Aug;69:125-129. doi: 10.1016/j.evalprogplan.2017.08.004. Epub 2017 Aug 5.
5
Political beliefs, views about technocracy, and energy and climate policy preferences.
Public Underst Sci. 2021 Apr;30(3):331-348. doi: 10.1177/0963662520978567. Epub 2020 Dec 16.
6
AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings.
Telecomm Policy. 2020 Jul;44(6):101976. doi: 10.1016/j.telpol.2020.101976. Epub 2020 Apr 17.
7
AI-Assisted Decision-making in Healthcare: The Application of an Ethics Framework for Big Data in Health and Research.
Asian Bioeth Rev. 2019 Sep 12;11(3):299-314. doi: 10.1007/s41649-019-00096-0. eCollection 2019 Sep.
8
New and emerging technology for adult social care - the example of home sensors with artificial intelligence (AI) technology.
Health Soc Care Deliv Res. 2023 Jun;11(9):1-64. doi: 10.3310/HRYW4281.
9
Explainable AI for Bioinformatics: Methods, Tools and Applications.
Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
10
Democracy, epistemic agency, and AI: political epistemology in times of artificial intelligence.
AI Ethics. 2022 Nov 22:1-10. doi: 10.1007/s43681-022-00239-4.

Cited By

1
Research agenda for using artificial intelligence in health governance: interpretive scoping review and framework.
BioData Min. 2023 Oct 31;16(1):31. doi: 10.1186/s13040-023-00346-w.

References

1
The Moral Choice Machine.
Front Artif Intell. 2020 May 20;3:36. doi: 10.3389/frai.2020.00036. eCollection 2020.
2
Transparent, explainable, and accountable AI for robotics.
Sci Robot. 2017 May 31;2(6). doi: 10.1126/scirobotics.aan6080.
3
Algorithmic Accountability and Public Reason.
Philos Technol. 2018;31(4):543-556. doi: 10.1007/s13347-017-0263-5. Epub 2017 May 24.
4
The Moral Machine experiment.
Nature. 2018 Nov;563(7729):59-64. doi: 10.1038/s41586-018-0637-6. Epub 2018 Oct 24.
5
Science as a Vocation in the Era of Big Data: the Philosophy of Science behind Big Data and humanity's Continued Part in Science.
Integr Psychol Behav Sci. 2018 Dec;52(4):508-522. doi: 10.1007/s12124-018-9447-5.