Institute of European and American Studies, Academia Sinica, No. 128, Sec. 2, Academia Rd., Nankang District, Taipei, 115, Taiwan.
Sci Eng Ethics. 2021 Jun 1;27(3):36. doi: 10.1007/s11948-021-00312-x.
Whereas using artificial intelligence (AI) to predict natural hazards is promising, applying a predictive policing algorithm (PPA) to predict human threats to others continues to be debated. Although PPAs were reported to be initially successful in Germany and Japan, the killing of Black Americans by police in the US has sparked calls to dismantle AI in law enforcement. However, although PPAs may statistically associate suspects with economically disadvantaged classes and ethnic minorities, the groups they aim to protect are often vulnerable populations as well (e.g., victims of human trafficking, kidnapping, domestic violence, or drug abuse). Thus, it is important to determine how better management can enhance the benefits of PPAs while reducing their bias. In this paper, we propose a policy schema to address this issue. First, after clarifying relevant concepts, we examine major criticisms of PPAs and argue that some of them should be addressed. If banning AI or making it taboo is an unrealistic solution, we must learn from our errors to improve it. We next identify additional challenges of PPAs and offer recommendations from a policy viewpoint. We conclude that the employment of PPAs should be merged into broader governance of the social safety net and audited publicly by parliament and civil society so that the unjust social structures that breed bias can be revised.