Martinez Natalia, Bertran Martin, Sapiro Guillermo
Department of Electrical and Computer Engineering, Duke University.
Proc Mach Learn Res. 2020 Jul;119:6755-6764.
In this work we formulate and formally characterize group fairness as a multi-objective optimization problem, where each sensitive group's risk is a separate objective. We propose a fairness criterion under which a classifier achieves minimax risk and is Pareto-efficient with respect to all groups, avoiding unnecessary harm, and can yield the best zero-gap model if policy dictates it. We provide a simple optimization algorithm compatible with deep neural networks that satisfies these constraints. Since our method does not require test-time access to sensitive attributes, it can also be applied to reduce worst-case classification errors between outcomes in unbalanced classification problems. We test the proposed methodology on real-world case studies of predicting income, ICU patient mortality, skin lesion classification, and assessing credit risk, demonstrating how our framework compares favorably to other approaches.
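To make the minimax group-risk idea concrete, the following is a minimal sketch (not the authors' actual algorithm) of one standard way to approximate a minimax-fair classifier: alternate gradient descent on a logistic-regression model with multiplicative-weight ascent on a mixture over groups, so that the highest-risk group receives progressively more weight. All function and variable names here (`minimax_fair_logreg`, `mu`, `eta`) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minimax_fair_logreg(X, y, groups, steps=500, lr=0.1, eta=0.05):
    """Sketch of minimax group-fair training for logistic regression.

    Alternates:
      1. a gradient step on the model parameters for the current
         group-weighted risk, and
      2. a multiplicative-weights (mirror ascent) step on the group
         mixture `mu`, which upweights the groups with highest risk,
    approximating min_w max_mu sum_g mu_g * risk_g(w).
    """
    n, d = X.shape
    gids = np.unique(groups)
    w = np.zeros(d)
    mu = np.ones(len(gids)) / len(gids)  # adversarial mixture over groups
    for _ in range(steps):
        p = sigmoid(X @ w)
        # Per-group cross-entropy risks.
        risks = np.array([
            -np.mean(y[groups == g] * np.log(p[groups == g] + 1e-12)
                     + (1 - y[groups == g]) * np.log(1 - p[groups == g] + 1e-12))
            for g in gids
        ])
        # Gradient of the mixture-weighted risk.
        grad = np.zeros(d)
        for m_g, g in zip(mu, gids):
            mask = groups == g
            grad += m_g * X[mask].T @ (p[mask] - y[mask]) / mask.sum()
        w -= lr * grad
        # Multiplicative weights: shift mass toward high-risk groups.
        mu *= np.exp(eta * risks)
        mu /= mu.sum()
    return w, mu
```

Note that, as in the paper's setting, group membership is used only at training time; at test time the learned `w` is applied without access to the sensitive attribute. The paper's full method additionally targets Pareto efficiency among all minimax solutions, which this toy sketch does not enforce.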