
Artificial Intelligence in Service of Human Needs: Pragmatic First Steps Toward an Ethics for Semi-Autonomous Agents.

Author Information

Rieder Travis N, Hutler Brian, Mathews Debra J H

Affiliation

Berman Institute of Bioethics, Johns Hopkins University.

Publication Information

AJOB Neurosci. 2020 Apr-Jun;11(2):120-127. doi: 10.1080/21507740.2020.1740354.

Abstract

The ethics of robots and artificial intelligence (AI) typically centers on "giving ethics" to as-yet imaginary AI with human-levels of autonomy in order to protect us from their potentially destructive power. It is often assumed that to do that, we should program AI with the true moral theory (whatever that might be), much as we teach morality to our children. This paper argues that the focus on AI with human-level autonomy is misguided. The robots and AI that we have now and in the near future are "semi-autonomous" in that their ability to make choices and to act is limited across a number of dimensions. Further, it may be morally problematic to create AI with human-level autonomy, even if it becomes possible. As such, any useful approach to AI ethics should begin with a theory of giving ethics to semi-autonomous agents (SAAs). In this paper, we work toward such a theory by evaluating our obligations to and for "natural" SAAs, including nonhuman animals and humans with developing and diminished capacities. Drawing on research in neuroscience, bioethics, and philosophy, we identify the ways in which AI semi-autonomy differs from semi-autonomy in humans and nonhuman animals. We conclude on the basis of these comparisons that when giving ethics to SAAs, we should focus on principles and restrictions that protect human interests, but that we can only permissibly maintain this approach so long as we do not aim at developing technology with human-level autonomy.
