Ethical Innovation Hub, Institute for Electrical Engineering in Medicine, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany.
Sci Eng Ethics. 2021 Jan 26;27(1):3. doi: 10.1007/s11948-021-00283-z.
In the present article, I will advocate caution against developing artificial moral agents (AMAs), based on the notion that the utilization of preliminary forms of AMAs will potentially feed back negatively on the human social system and on human moral thought itself and its value, e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments, and devaluing character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economic purposes. I will base my arguments on two thought experiments. The first thought experiment deals with the potential to generate a replica of an individual's moral stances with the purpose of increasing what I term 'moral efficiency'. Hence, as a first risk, the unregulated utilization of premature AMAs in a neoliberal capitalist system is likely to disadvantage those who cannot afford 'moral replicas' and to further reinforce social inequalities. The second thought experiment deals with the idea of a 'moral calculator'. As a second risk, I will argue that, even as devices equally accessible to all and aimed at augmenting human moral deliberation, 'moral calculators' as preliminary forms of AMAs are likely to diminish the breadth and depth of the concepts employed in moral arguments. Again, I base this claim on the idea that the currently dominant economic system rewards increases in productivity. However, increases in efficiency will mostly stem from relying on the outputs of 'moral calculators' without further scrutiny. Premature AMAs will cover only a limited scope of moral argumentation, and over-reliance on them will hence narrow human moral thought. In addition, as a third risk, I will argue that an increased disregard for the interior of the moral agent may ensue, a trend that can already be observed in the literature.