
Learning Optimal Fair Policies.

Authors

Nabi Razieh, Malinsky Daniel, Shpitser Ilya

Affiliation

Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.

Publication

Proc Mach Learn Res. 2019 Jun;97:4674-4682.

Abstract

Systematic discriminatory biases present in our society influence the way data is collected and stored, the way variables are defined, and the way scientific findings are put into practice as policy. Automated decision procedures and learning algorithms applied to such data may serve to perpetuate existing injustice or unfairness in our society. In this paper, we consider how to make optimal but fair decisions, which "break the cycle of injustice" by correcting for the unfair dependence of both decisions and outcomes on sensitive features (e.g., variables that correspond to gender, race, disability, or other protected attributes). We use methods from causal inference and constrained optimization to learn optimal policies in a way that addresses multiple potential biases which afflict data analysis in sensitive contexts, extending the approach of Nabi & Shpitser (2018). Our proposal comes equipped with the theoretical guarantee that the chosen fair policy will induce a joint distribution for new instances that satisfies given fairness constraints. We illustrate our approach with both synthetic data and real criminal justice data.
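The abstract describes learning an optimal policy by constrained optimization, where the constraint bounds the unfair dependence of decisions on a sensitive feature. A minimal sketch of that idea on synthetic data is below; the toy model, variable names, and the simple "toggle the sensitive feature" gap (a crude stand-in for the paper's path-specific effects) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: utility maximization subject to a fairness constraint.
# The data-generating process and the fairness functional are toy choices.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
s = rng.integers(0, 2, n).astype(float)  # sensitive feature (e.g., protected attribute)
x = rng.normal(0, 1, n) + 0.8 * s        # covariate influenced by s
y = 1.0 * x + 0.5 * s                    # outcome depends on x and (unfairly) on s

def policy(theta, s_val, x_val):
    # probability of a favorable decision under parameters theta
    return 1.0 / (1.0 + np.exp(-(theta[0] + theta[1] * x_val + theta[2] * s_val)))

def neg_utility(theta):
    # maximize expected outcome weighted by the decision probability
    return -np.mean(policy(theta, s, x) * y)

def fairness_gap(theta):
    # stand-in for an unfair effect: change in decision probability
    # when the sensitive feature is toggled, holding x fixed
    return np.mean(policy(theta, np.ones(n), x) - policy(theta, np.zeros(n), x))

eps = 0.05  # tolerance on the unfair effect
cons = [{"type": "ineq", "fun": lambda t: eps - fairness_gap(t)},
        {"type": "ineq", "fun": lambda t: eps + fairness_gap(t)}]

res = minimize(neg_utility, x0=np.zeros(3), method="SLSQP", constraints=cons)
print("theta:", res.x, "fairness gap:", fairness_gap(res.x))
```

The constrained solution trades a little expected utility for keeping the decision's dependence on the sensitive feature within the tolerance; the paper's actual constraints are causal (path-specific effects) rather than this associational proxy.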


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8ad6/6935348/61ec7dd1aa72/nihms-1063774-f0001.jpg
