Department of Philosophy and Social Critique, Faculty of Political Science and Diplomacy, Vytautas Magnus University, V. Putvinskio g. 23 (R 403), 44243, Kaunas, Lithuania.
Research Cluster for Applied Ethics, Faculty of Law, Vytautas Magnus University, V. Putvinskio g. 23 (R 403), 44243, Kaunas, Lithuania.
Sci Eng Ethics. 2020 Feb;26(1):141-157. doi: 10.1007/s11948-019-00084-5. Epub 2019 Jan 30.
This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is "computable" depends on how programmers understand ethics in the first place and on how adequately they grasp the ethical problems and methodological challenges in these fields. Owing to their general lack of ethical knowledge or expertise, researchers and programmers face at least two types of problems. The first consists of so-called rookie mistakes, which could be addressed by equipping them with the necessary ethical knowledge. The second, a more difficult methodological issue, concerns areas of peer disagreement in ethics, for which no easy solutions are currently available. The paper examines several existing approaches to highlight the ethical pitfalls and challenges involved; familiarity with these and similar problems can help programmers avoid them and build better moral machines. It concludes that ethical decisions regarding moral robots should be based on avoiding what is immoral (i.e. prohibiting certain immoral actions), in combination with a pluralistic ethical method of solving moral problems, rather than on any single ethical approach, so as to avoid normative bias.