Arnold David, Dobbie Will, Hull Peter
University of California, San Diego and NBER.
Harvard Kennedy School and NBER.
Am Econ Rev Insights. 2025 Jun;7(2):231-249. doi: 10.1257/aeri.20240249.
We develop new quasi-experimental tools to understand algorithmic discrimination and build non-discriminatory algorithms when the outcome of interest is only selectively observed. We first show that algorithmic discrimination arises when the available algorithmic inputs are systematically different for individuals with the same objective potential outcomes. We then show how algorithmic discrimination can be eliminated by measuring and purging these conditional input disparities. Leveraging the quasi-random assignment of bail judges in New York City, we find that our new algorithms not only eliminate algorithmic discrimination but also generate more accurate predictions by correcting for the selective observability of misconduct outcomes.
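The abstract combines two ideas: correcting for the selective observability of misconduct (outcomes are seen only for released defendants) using the quasi-random assignment of bail judges, and removing group disparities in algorithmic inputs conditional on potential outcomes. The sketch below is an illustrative toy version of that logic on simulated data, not the authors' estimator; the variable names (leniency, x, group, y_star), the inverse-propensity-weighting step, and the linear purging step are all simplifying assumptions made for exposition.

```python
# Toy sketch of (1) handling selectively observed outcomes via quasi-random judge
# leniency and (2) purging group disparities in inputs conditional on a potential-
# outcome proxy. Simulated data; not the paper's actual identification strategy.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# True potential outcome: would the defendant commit pretrial misconduct if released?
# By construction it has the same distribution in both groups.
group = rng.integers(0, 2, n)                       # protected attribute (hypothetical)
y_star = rng.binomial(1, 0.3, n)

# Algorithmic input (e.g., a record-based score) is informative about y_star but also
# shifted by group, so equally risky individuals present different inputs across groups.
x = y_star + 0.8 * group + rng.normal(0, 1, n)

# Quasi-random judge leniency drives release; misconduct is observed only if released.
leniency = rng.uniform(0.2, 0.9, n)                 # judge-specific release propensity
released = rng.binomial(1, leniency).astype(bool)
y_obs = np.where(released, y_star, np.nan)          # selective observability

# Step 1 (selective-labels correction, highly simplified): since release is driven by
# a quasi-random propensity, an inverse-propensity-weighted model fit on released
# defendants gives a proxy for everyone's potential outcome.
w = 1.0 / leniency[released]
proxy_model = LogisticRegression()
proxy_model.fit(np.column_stack([x[released], group[released]]),
                y_obs[released].astype(int), sample_weight=w)
y_proxy = proxy_model.predict_proba(np.column_stack([x, group]))[:, 1]

# Step 2 (purging conditional input disparities): remove the part of x explained by
# group *conditional on* the potential-outcome proxy, then predict from the purged input.
purge = LinearRegression().fit(np.column_stack([y_proxy, group]), x)
x_purged = x - purge.coef_[1] * group               # subtract only the group component

risk_model = LogisticRegression().fit(x_purged.reshape(-1, 1)[released],
                                      y_obs[released].astype(int), sample_weight=w)

# Check: predicted risk for truly low-risk individuals no longer differs by group.
pred = risk_model.predict_proba(x_purged.reshape(-1, 1))[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted risk among truly low-risk = "
          f"{pred[(group == g) & (y_star == 0)].mean():.3f}")
```

In this toy setup, training directly on x would assign systematically higher risk to one group at the same true risk; purging the group component of the input conditional on the outcome proxy removes that gap, which is the mechanism the abstract describes at a high level.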