Sætra, Henrik Skaug
Østfold University College, Remmen, 1757, Halden, Norway.
Technol Soc. 2020 Aug;62:101283. doi: 10.1016/j.techsoc.2020.101283. Epub 2020 Jun 8.
Artificial intelligence (AI) has proven superior to human decision-making in certain areas. This is particularly the case wherever advanced strategic reasoning and the analysis of vast amounts of data are required to solve complex problems. Few human activities fit this description better than politics. In politics we deal with some of the most complex issues humans face, short-term and long-term consequences must be balanced, and we make decisions knowing that we do not fully understand their consequences. I examine an extreme case of the application of AI in the domain of government, and use this case to explore a subset of the potential harms associated with algorithmic governance. I focus on five objections grounded in political theory and the potential harms of an AI technocracy. These objections are based on the ideas of 'political man' and participation as a prerequisite for legitimacy, the non-morality of machines, and the value of transparency and accountability. I conclude that these objections do not successfully derail AI technocracy, provided that mechanisms for control and backup are in place and that we design a system in which humans retain control over the direction and fundamental goals of society. Such a technocracy, if the AI capabilities for policy formation assumed here become reality, may in theory provide us with better means of participation, greater legitimacy, and more efficient government.