The Moral Machine experiment.

Affiliations

The Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA.

Department of Human Evolutionary Biology, Harvard University, Cambridge, MA, USA.

Publication information

Nature. 2018 Nov;563(7729):59-64. doi: 10.1038/s41586-018-0637-6. Epub 2018 Oct 24.

Abstract

With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents' demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.
