Dorton Stephen L, Ministero Lauren M, Alaybek Balca, Bryant Douglas J
Social & Behavioral Sciences Department, The MITRE Corporation, Bedford, MA, United States.
School of Public Policy, University of Maryland, College Park, MD, United States.
Front Artif Intell. 2023 Jul 20;6:1143907. doi: 10.3389/frai.2023.1143907. eCollection 2023.
There is a growing expectation that artificial intelligence (AI) developers foresee and mitigate harms that might result from their creations; however, this is exceptionally difficult given the prevalence of emergent behaviors that occur when integrating AI into complex sociotechnical systems. We argue that Naturalistic Decision Making (NDM) principles, models, and tools are well-suited to tackling this challenge. Already applied in high-consequence domains, NDM tools such as the premortem have been shown to uncover underlying factors that could lead to ethical harms. Such NDM tools have already been used to develop AI that is more trustworthy and resilient, and they can help avoid unintended consequences of AI built with noble intentions. We present predictive policing algorithms as a use case, highlighting the factors that led to ethical harms and how NDM tools could help foresee and mitigate such harms.