Similar Articles

1. AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement.
   Sci Eng Ethics. 2023 Mar 23;29(2):11. doi: 10.1007/s11948-023-00428-2.
2. Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles.
   Sci Eng Ethics. 2020 Oct;26(5):2461-2472. doi: 10.1007/s11948-020-00242-0.
3. Attributions toward artificial agents in a modified Moral Turing Test.
   Sci Rep. 2024 Apr 30;14(1):8458. doi: 10.1038/s41598-024-58087-7.
4. Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?
   Sci Eng Ethics. 2021 Jun 29;27(4):42. doi: 10.1007/s11948-021-00318-5.
5. The Puzzle of Evaluating Moral Cognition in Artificial Agents.
   Cogn Sci. 2023 Aug;47(8):e13315. doi: 10.1111/cogs.13315.
6. Not in my AI: Moral engagement and disengagement in health care AI development.
   Pac Symp Biocomput. 2023;28:496-506.
7. How Could We Know When a Robot was a Moral Patient?
   Camb Q Healthc Ethics. 2021 Jul;30(3):459-471. doi: 10.1017/S0963180120001012.
8. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible.
   Camb Q Healthc Ethics. 2021 Jul;30(3):435-447. doi: 10.1017/S0963180120000985.
9. Psychological and Brain Responses to Artificial Intelligence's Violation of Community Ethics.
   Cyberpsychol Behav Soc Netw. 2024 Aug;27(8):562-570. doi: 10.1089/cyber.2023.0524. Epub 2024 May 17.
10. ChatGPT's inconsistent moral advice influences users' judgment.
    Sci Rep. 2023 Apr 6;13(1):4569. doi: 10.1038/s41598-023-31341-0.

Cited By

1. Rage against the authority machines: how to design artificial moral advisors for moral enhancement.
   AI Soc. 2025;40(4):2237-2248. doi: 10.1007/s00146-024-02135-3. Epub 2024 Nov 30.
2. AI-assisted ethics? Considerations of AI simulation for the ethical assessment and design of assistive technologies.
   Front Genet. 2023 Jun 26;14:1039839. doi: 10.3389/fgene.2023.1039839. eCollection 2023.

AI Moral Enhancement: Upgrading the Socio-Technical System of Moral Engagement.

Affiliations

Department of Philosophy, Southern Connecticut State University, 501 Crescent Street, New Haven, CT, 06515, USA.

Department of Philosophy, Maastricht University, FASoS, Grote Gracht 90-92, 6211 PG, Maastricht, The Netherlands.

Publication Information

Sci Eng Ethics. 2023 Mar 23;29(2):11. doi: 10.1007/s11948-023-00428-2.

DOI: 10.1007/s11948-023-00428-2
PMID: 36952140
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10036265/
Abstract

Several proposals for moral enhancement would use AI to augment (auxiliary enhancement) or even supplant (exhaustive enhancement) human moral reasoning or judgment. Exhaustive enhancement proposals conceive AI as some self-contained oracle whose superiority to our own moral abilities is manifest in its ability to reliably deliver the 'right' answers to all our moral problems. We think this is a mistaken way to frame the project, as it presumes that we already know many things that we are still in the process of working out, and reflecting on this fact reveals challenges even for auxiliary proposals that eschew the oracular approach. We argue there is nonetheless a substantial role that 'AI mentors' could play in our moral education and training. Expanding on the idea of an AI Socratic Interlocutor, we propose a modular system of multiple AI interlocutors with their own distinct points of view reflecting their training in a diversity of concrete wisdom traditions. This approach minimizes any risk of moral disengagement, while the existence of multiple modules from a diversity of traditions ensures pluralism is preserved. We conclude with reflections on how all this relates to the broader notion of moral transcendence implicated in the project of AI moral enhancement, contending it is precisely the whole concrete socio-technical system of moral engagement that we need to model if we are to pursue moral enhancement.

Abstract (Chinese translation)

有几项道德增强的提议将利用人工智能来增强(辅助增强)甚至取代(全面增强)人类的道德推理或判断。全面增强的提议将人工智能设想为一个自给自足的神谕,其在可靠地为我们所有的道德问题提供“正确”答案方面的优越性,明显优于我们自己的道德能力。我们认为这是一种错误的框架方式,因为它假定我们已经知道了许多我们仍在努力解决的事情,而反思这一事实甚至对那些回避神谕方法的辅助提议也提出了挑战。我们认为,“人工智能导师”在我们的道德教育和培训中仍然可以发挥重要作用。我们扩展了人工智能苏格拉底对话者的概念,提出了一个由多个具有不同观点的人工智能对话者组成的模块化系统,反映了他们在多样性的具体智慧传统中的培训。这种方法最大限度地降低了道德脱离的风险,而来自不同传统的多个模块的存在确保了多元主义得以保留。最后,我们思考了这一切与人工智能道德增强项目中所涉及的更广泛的道德超越概念的关系,认为如果我们要追求道德增强,就需要对整个具体的社会技术道德参与系统进行建模。