Digital Ethics Lab, Oxford Internet Institute, University of Oxford, Oxford, UK.
The Alan Turing Institute, London, UK.
Sci Eng Ethics. 2020 Jun;26(3):1771-1796. doi: 10.1007/s11948-020-00213-5. Epub 2020 Apr 3.
The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. AI4SG has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of others is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.