Schools of Social Welfare and Public Policy, University of California, Berkeley, CA, USA.
Administrative Office, U.S. Courts, Washington DC, USA.
Behav Sci Law. 2020 May;38(3):259-278. doi: 10.1002/bsl.2465. Epub 2020 May 5.
Although risk assessment has increasingly been used as a tool to help reform the criminal justice system, some stakeholders are adamantly opposed to using algorithms. The principal concern is that any benefits achieved by safely reducing rates of incarceration will be offset by costs to racial justice claimed to be inherent in the algorithms themselves. But fairness trade-offs are inherent to the task of predicting recidivism, whether the prediction is made by an algorithm or a human. Based on a matched sample of 67,784 Black and White federal supervisees assessed with the Post Conviction Risk Assessment, we compared how three alternative strategies for "debiasing" algorithms affect these trade-offs, using arrest for a violent crime as the criterion. These candidate algorithms all strongly predict violent reoffending (areas under the curve = 0.71-0.72), but vary in their association with race (r = 0.00-0.21) and shift the trade-off between balance in positive predictive value and balance in false-positive rates. Providing algorithms with access to race (rather than omitting race or "blinding" its effects) can maximize calibration and minimize imbalanced error rates. Implications for policymakers with value preferences for efficiency versus equity are discussed.
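The trade-off the abstract describes can be made concrete with a small sketch. The code below uses entirely hypothetical data (not the paper's sample or the Post Conviction Risk Assessment) to compute, for two groups, the two fairness metrics the abstract contrasts: positive predictive value (PPV, a calibration-style metric) and false-positive rate (FPR, an error-rate metric). With different base rates of the outcome across groups, the two metrics generally cannot be equalized at once, which is the sense in which fairness trade-offs are inherent to prediction.

```python
def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn


def ppv_and_fpr(y_true, y_pred):
    """PPV = TP / (TP + FP); FPR = FP / (FP + TN)."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    return ppv, fpr


# Hypothetical groups with different base rates of the outcome.
group_a_true = [1, 1, 1, 0, 0, 0, 0, 0]   # base rate 3/8
group_a_pred = [1, 1, 0, 1, 0, 0, 0, 0]
group_b_true = [1, 0, 0, 0, 0, 0, 0, 0]   # base rate 1/8
group_b_pred = [1, 1, 0, 0, 0, 0, 0, 0]

ppv_a, fpr_a = ppv_and_fpr(group_a_true, group_a_pred)
ppv_b, fpr_b = ppv_and_fpr(group_b_true, group_b_pred)

# PPV and FPR each differ across the two groups: equalizing one
# metric at these base rates would push the other further apart.
print(f"Group A: PPV={ppv_a:.3f}, FPR={fpr_a:.3f}")
print(f"Group B: PPV={ppv_b:.3f}, FPR={fpr_b:.3f}")
```

The function names and toy data here are illustrative assumptions; the paper's analysis is based on actual federal supervision records, not on simulated labels like these.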