Harris, John
Camb Q Healthc Ethics. 2020 Jan;29(1):71-79. doi: 10.1017/S096318011900080X.
In a recent paper in Nature,1 entitled "The Moral Machine Experiment," Edmond Awad et al. make a number of breathtakingly reckless assumptions, both about the decision-making capacities of current so-called "autonomous vehicles" and about the nature of morality and the law. Accepting their bizarre premise that the holy grail is to find out how to obtain cognizance of public morality and then program driverless vehicles accordingly, the Moral Machinists' argument proceeds in four steps:

1) Find out what "public morality" will prefer to see happen.

2) On the basis of this discovery, both claim popular acceptance of the preferences and persuade would-be owners and manufacturers that the vehicles are programmed with the best solutions to any survival dilemmas they might face.

3) Citizen agreement thus characterized is then presumed to deliver moral license for the chosen preferences.

4) This yields "permission" to program vehicles to spare or condemn those outside the vehicles when their deaths will preserve vehicle and occupants.

This paper argues that the Moral Machine Experiment fails dramatically on all four counts.