Carolina Villegas-Galaviz, Kirsten Martin
Technology Ethics Center, University of Notre Dame, 204 O'Shaughnessy Hall, Notre Dame, IN 46556 USA.
IT, Analytics, and Operations, University of Notre Dame, South Bend, IN USA.
AI Soc. 2023 Mar 23:1-12. doi: 10.1007/s00146-023-01642-z.
This paper investigates how the introduction of AI to decision making increases moral distance and recommends the ethics of care to augment the ethical examination of AI decision making. With AI decision making, face-to-face interactions are minimized, and decisions are part of a more opaque process that humans do not always understand. Within decision-making research, the concept of moral distance is used to explain why individuals behave unethically towards those who are not seen. Moral distance abstracts those who are impacted by the decision and leads to less ethical decisions. The goal of this paper is to identify and analyze the moral distance created by AI through both proximity distance (in space, time, and culture) and bureaucratic distance (derived from hierarchy, complex processes, and principlism). We then propose the ethics of care as a moral framework to analyze the moral implications of AI. The ethics of care brings to the forefront circumstances and context, interdependence, and vulnerability in analyzing algorithmic decision making.