Uwe Peters
Center for Science and Thought, University of Bonn, Bonn, Germany.
Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK.
Philos Technol. 2022;35(2):25. doi: 10.1007/s13347-022-00512-8. Epub 2022 Mar 30.
Some artificial intelligence (AI) systems can display algorithmic bias, i.e., they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people's social identity, for instance their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people's political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs from them in an important respect: in a democratic society, there are strong social norms against gender and racial biases, but this does not hold to the same extent for political biases. Political biases can therefore influence people more powerfully, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases, even though they can all produce similar harm. Since some algorithms can now also easily identify people's political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.