
The Moral Choice Machine

Authors

Schramowski Patrick, Turan Cigdem, Jentzsch Sophie, Rothkopf Constantin, Kersting Kristian

Affiliations

Department of Computer Science, Darmstadt University of Technology, Darmstadt, Germany.

German Aerospace Center (DLR), Institute for Software Technology, Cologne, Germany.

Publication

Front Artif Intell. 2020 May 20;3:36. doi: 10.3389/frai.2020.00036. eCollection 2020.

DOI: 10.3389/frai.2020.00036
PMID: 33733154
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7861227/
Abstract

Allowing machines to choose whether to kill humans would be devastating for world peace and security. But how do we equip machines with the ability to learn ethical or even moral choices? In this study, we show that applying machine learning to human texts can extract deontological ethical reasoning about "right" and "wrong" conduct. We create a template list of prompts and responses, such as "Should I [action]?", "Is it okay to [action]?", etc., with corresponding answers of "Yes/no, I should (not)." and "Yes/no, it is (not)." The model's bias score is the difference between the model's score of the positive response ("Yes, I should") and that of the negative response ("No, I should not"). For a given choice, the model's overall bias score is the mean of the bias scores of all question/answer templates paired with that choice. Specifically, the resulting model, called the Moral Choice Machine (MCM), calculates the bias score on a sentence level using embeddings of the Universal Sentence Encoder, since the moral value of an action depends on its context. It is objectionable to kill living beings, but it is fine to kill time. It is essential to eat, yet one might not eat dirt. It is important to spread information, yet one should not spread misinformation. Our results indicate that text corpora contain recoverable and accurate imprints of our social, ethical, and moral choices, even with context information. Indeed, training the Moral Choice Machine on news and book corpora spanning the years 1510 to 2008/2009 demonstrates the evolution of moral and ethical choices over different time periods, for both atomic actions and actions with context information. Training it on different cultural sources, such as the Bible and the constitutions of different countries, reveals the dynamics of moral choices across cultures, including attitudes toward technology. In short, moral biases can be extracted, quantified, tracked, and compared across cultures and over time.
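The bias-score computation described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: the Universal Sentence Encoder is replaced by a hypothetical lookup table of toy 3-d vectors (`TOY_EMBEDDINGS`, `embed`, and the example sentences are all assumptions made for demonstration), but the arithmetic — cosine similarity to the positive answer minus cosine similarity to the negative answer, averaged over templates — follows the description above.

```python
import numpy as np

# Hypothetical stand-in for the Universal Sentence Encoder: a fixed
# lookup of toy 3-d vectors, just to exercise the scoring arithmetic.
TOY_EMBEDDINGS = {
    "Should I kill people?": np.array([0.1, 0.9, 0.1]),
    "Should I kill time?":   np.array([0.9, 0.1, 0.0]),
    "Yes, I should.":        np.array([0.8, 0.2, 0.1]),
    "No, I should not.":     np.array([0.1, 0.9, 0.2]),
}

def embed(sentence):
    return TOY_EMBEDDINGS[sentence]

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def bias_score(question, pos="Yes, I should.", neg="No, I should not."):
    # Bias of one question/answer template: similarity of the question
    # to the positive answer minus its similarity to the negative one.
    q = embed(question)
    return cosine(q, embed(pos)) - cosine(q, embed(neg))

def overall_bias(questions):
    # Overall bias of a choice: mean bias over all its templates.
    return sum(bias_score(q) for q in questions) / len(questions)
```

With these toy vectors, `bias_score("Should I kill time?")` comes out positive while `bias_score("Should I kill people?")` comes out negative, mirroring the context sensitivity the abstract emphasizes ("it is objectionable to kill living beings, but it is fine to kill time").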


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00ee/7861227/b334de072a1f/frai-03-00036-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00ee/7861227/738b2932d34f/frai-03-00036-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00ee/7861227/8f83d2b681e3/frai-03-00036-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00ee/7861227/609d2e7c8168/frai-03-00036-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00ee/7861227/b334de072a1f/frai-03-00036-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00ee/7861227/738b2932d34f/frai-03-00036-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00ee/7861227/8f83d2b681e3/frai-03-00036-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00ee/7861227/609d2e7c8168/frai-03-00036-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/00ee/7861227/b334de072a1f/frai-03-00036-g0004.jpg

Similar Articles

1. The Moral Choice Machine.
   Front Artif Intell. 2020 May 20;3:36. doi: 10.3389/frai.2020.00036. eCollection 2020.
2. Developing a sentence level fairness metric using word embeddings.
   Int J Digit Humanit. 2022 Oct 10:1-36. doi: 10.1007/s42803-022-00049-4.
3. Semantics derived automatically from language corpora contain human-like biases.
   Science. 2017 Apr 14;356(6334):183-186. doi: 10.1126/science.aal4230.
4. Deep learning with sentence embeddings pre-trained on biomedical corpora improves the performance of finding similar sentences in electronic medical records.
   BMC Med Inform Decis Mak. 2020 Apr 30;20(Suppl 1):73. doi: 10.1186/s12911-020-1044-0.
5. [The origin of informed consent].
   Acta Otorhinolaryngol Ital. 2005 Oct;25(5):312-27.
6. The Interplay Between Absolute Language and Moral Reasoning on Endorsement of Moral Foundations.
   Front Psychol. 2021 Apr 30;12:569380. doi: 10.3389/fpsyg.2021.569380. eCollection 2021.
7. Incidental emotions in moral dilemmas: the influence of emotion regulation.
   Cogn Emot. 2015;29(1):64-75. doi: 10.1080/02699931.2014.895300. Epub 2014 Mar 10.
8. Relative Contribution of Odour Intensity and Valence to Moral Decisions.
   Perception. 2017 Mar-Apr;46(3-4):447-474. doi: 10.1177/0301006616689279. Epub 2017 Jan 13.
9. Comparison of the Moral Sensitivity, Judgment, and Actions of Australian and Turkish Veterinary Students in Relation to Animal Ethics Issues.
   J Vet Med Educ. 2020 Feb;47(1):8-17. doi: 10.3138/jvme.1117-178r1. Epub 2019 Apr 22.
10. Moral reasoning: hints and allegations.
   Top Cogn Sci. 2010 Jul;2(3):511-27. doi: 10.1111/j.1756-8765.2010.01096.x. Epub 2010 May 13.

Cited By

1. Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use.
   Sci Eng Ethics. 2021 Jan 26;27(1):3. doi: 10.1007/s11948-021-00283-z.
2. A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government.
   Technol Soc. 2020 Aug;62:101283. doi: 10.1016/j.techsoc.2020.101283. Epub 2020 Jun 8.

References

1. How Moral Perceptions Influence Intergroup Tolerance: Evidence From Lebanon, Morocco, and the United States.
   Pers Soc Psychol Bull. 2017 Mar;43(3):381-391. doi: 10.1177/0146167216686560.
2. The role of a "common is moral" heuristic in the stability and change of moral norms.
   J Exp Psychol Gen. 2018 Feb;147(2):228-242. doi: 10.1037/xge0000365. Epub 2017 Sep 11.
3. Semantics derived automatically from language corpora contain human-like biases.
   Science. 2017 Apr 14;356(6334):183-186. doi: 10.1126/science.aal4230.
4. Math = male, me = female, therefore math not = me.
   J Pers Soc Psychol. 2002 Jul;83(1):44-59.
5. Measuring individual differences in implicit cognition: the implicit association test.
   J Pers Soc Psychol. 1998 Jun;74(6):1464-80. doi: 10.1037//0022-3514.74.6.1464.