Turku School of Economics, Information Systems Sciences, University of Turku, Turku, Finland.
Sci Eng Ethics. 2024 Oct 9;30(5):46. doi: 10.1007/s11948-024-00507-y.
The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for the future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack ethical justification. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of western democracies that are ethically relevant. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development in a more ethical direction. The goal is to contribute to broadening the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective on societal justice. The paper discusses how Rawls's theory of justice as fairness and its key concepts relate to ongoing developments in AI ethics and proposes what principles offering a foundation for operationalising AI ethics in practice might look like if aligned with Rawls's theory of justice as fairness.