Giubilini Alberto, Savulescu Julian
1 Oxford Martin School, University of Oxford, Oxford, UK.
2 Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK.
Philos Technol. 2018;31(2):169-188. doi: 10.1007/s13347-017-0285-z. Epub 2017 Dec 8.
We describe a form of moral artificial intelligence that could be used to improve human moral decision-making. We call it the "artificial moral advisor" (AMA). The AMA would implement a relativistic version of the "ideal observer" famously described by Roderick Firth. We describe similarities and differences between the AMA and Firth's ideal observer. Like Firth's ideal observer, the AMA is disinterested, dispassionate, and consistent in its judgments. Unlike Firth's observer, the AMA is non-absolutist, because it would take into account the human agent's own principles and values. We argue that the AMA would respect and indeed enhance individuals' moral autonomy, help individuals achieve a wide and a narrow reflective equilibrium, make up for the limitations of human moral psychology in a way that takes conservatives' objections to human bioenhancement seriously, and implement the positive functions of intuitions and emotions in human morality without their downsides, such as biases and prejudices.