
Algorithmic Political Bias in Artificial Intelligence Systems.

Author information

Uwe Peters

Affiliations

Center for Science and Thought, University of Bonn, Bonn, Germany.

Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK.

Publication information

Philos Technol. 2022;35(2):25. doi: 10.1007/s13347-022-00512-8. Epub 2022 Mar 30.

Abstract

Some artificial intelligence (AI) systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people's social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people's political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are (in a democratic society) strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases even though they all can produce similar harm. Since some algorithms can now also easily identify people's political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.


Similar articles

1. Ethics and governance of trustworthy medical artificial intelligence.
BMC Med Inform Decis Mak. 2023 Jan 13;23(1):7. doi: 10.1186/s12911-023-02103-9.
2. Algorithms are not neutral: Bias in collaborative filtering.
AI Ethics. 2022;2(4):763-770. doi: 10.1007/s43681-022-00136-w. Epub 2022 Jan 31.

Cited by

1. Artificial Intelligence in Medicine and Dentistry.
Acta Stomatol Croat. 2023 Mar;57(1):70-84. doi: 10.15644/asc57/1/8.

References

1. Political sectarianism in America.
Science. 2020 Oct 30;370(6516):533-536. doi: 10.1126/science.abe1715.
2. Emerging from AI utopia.
Science. 2020 Apr 3;368(6486):9. doi: 10.1126/science.abb9369.
3. Bald and Bad?
Exp Psychol. 2019 Sep;66(5):331-345. doi: 10.1027/1618-3169/a000457. Epub 2019 Oct 11.
4. Artificial intelligence in healthcare: past, present and future.
Stroke Vasc Neurol. 2017 Jun 21;2(4):230-243. doi: 10.1136/svn-2017-000101. eCollection 2017 Dec.
