Eindhoven University of Technology, Eindhoven, The Netherlands.
Sci Eng Ethics. 2018 Aug;24(4):1201-1219. doi: 10.1007/s11948-017-9943-x. Epub 2017 Jul 18.
Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that: they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human-robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.