Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA.
Department of Economics, University of Exeter Business School, Exeter, UK.
Nat Hum Behav. 2020 Feb;4(2):134-143. doi: 10.1038/s41562-019-0762-8. Epub 2019 Oct 28.
When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human-machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.