Martin, Dominic
John Molson School of Business, Concordia University, Montréal, Canada.
Department of Philosophy, McGill University, Montréal, Canada.
Sci Eng Ethics. 2017 Aug;23(4):951-967. doi: 10.1007/s11948-016-9833-7. Epub 2016 Nov 30.
Who should decide how a machine will decide what to do when it is driving a car, performing a medical procedure, or, more generally, when it faces any kind of morally laden decision? More and more, machines are making complex decisions with a considerable degree of autonomy. We should be far more preoccupied with this problem than we currently are. After a series of preliminary remarks, this paper considers four possible answers to the question raised above. First, we may claim that the maker of a machine gets to decide how it will behave in morally laden scenarios. Second, we may claim that the users of a machine should decide. Third, that decision may have to be made collectively or, fourth, by other machines built for this special purpose. The paper argues that each of these approaches suffers from its own shortcomings, and it concludes by showing, among other things, which approaches should be emphasized for different types of machines, situations, and/or morally laden decisions.