Wang Haotao, Hong Junyuan, Zhou Jiayu, Wang Zhangyang
University of Texas at Austin.
Michigan State University.
Transactions on Machine Learning Research. 2023. Epub 2023 Mar 13.
Increasing concerns have been raised about fairness in deep learning in recent years. Existing fairness-aware machine learning methods focus mainly on the fairness of in-distribution data. In real-world applications, however, distribution shifts between training and test data are common. In this paper, we first show that the fairness achieved by existing methods can be easily broken by slight distribution shifts. To address this problem, we propose a novel fairness learning method termed CUrvature MAtching (CUMA), which achieves robust fairness that generalizes to unseen domains with unknown distribution shifts. Specifically, CUMA enforces similar generalization ability on the majority and minority groups by matching the loss curvature distributions of the two groups. We evaluate our method on three popular fairness datasets. Compared with existing methods, CUMA achieves superior fairness under unseen distribution shifts without sacrificing either overall accuracy or in-distribution fairness.
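The abstract describes CUMA only at a high level. As a rough illustration of the idea, the sketch below pairs a SAM-style loss-sharpness proxy (one hypothetical way to measure per-sample loss curvature) with a Gaussian-kernel MMD penalty that pulls the two groups' curvature distributions together. The names curvature_proxy, mmd, and cuma_step, the eps and lam values, and the toy model and data are illustrative assumptions, not the paper's actual implementation or hyperparameters.

    import torch
    import torch.nn.functional as F
    from torch.func import functional_call

    def curvature_proxy(model, x, y, eps=0.01):
        """Per-sample loss-sharpness proxy: the loss increase after a small
        gradient-ascent perturbation of the weights (a SAM-style stand-in
        for local loss curvature)."""
        loss0 = F.binary_cross_entropy_with_logits(
            model(x).squeeze(-1), y, reduction="none")
        params = dict(model.named_parameters())
        # Ascent direction on the weights; grads are detached (create_graph=False).
        grads = torch.autograd.grad(loss0.mean(), list(params.values()),
                                    retain_graph=True)
        norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
        # Functional forward at the perturbed weights; the module itself is untouched.
        perturbed = {n: p + eps * g / norm
                     for (n, p), g in zip(params.items(), grads)}
        loss1 = F.binary_cross_entropy_with_logits(
            functional_call(model, perturbed, (x,)).squeeze(-1), y, reduction="none")
        return loss1 - loss0  # per-sample curvature/sharpness estimate

    def mmd(a, b, sigma=1.0):
        """Squared Gaussian-kernel MMD between two 1-D samples."""
        k = lambda u, v: torch.exp(-(u[:, None] - v[None, :]) ** 2 / (2 * sigma ** 2))
        return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

    def cuma_step(model, opt, x_maj, y_maj, x_min, y_min, lam=0.1):
        """One training step: task loss plus lam * MMD between the majority
        and minority groups' curvature distributions."""
        c_maj = curvature_proxy(model, x_maj, y_maj)
        c_min = curvature_proxy(model, x_min, y_min)
        logits = model(torch.cat([x_maj, x_min])).squeeze(-1)
        task = F.binary_cross_entropy_with_logits(logits, torch.cat([y_maj, y_min]))
        loss = task + lam * mmd(c_maj, c_min)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Toy usage with random tabular data (16 features, binary labels).
    model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                                torch.nn.Linear(32, 1))
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    x_maj, y_maj = torch.randn(64, 16), torch.randint(0, 2, (64,)).float()
    x_min, y_min = torch.randn(32, 16), torch.randint(0, 2, (32,)).float()
    print(cuma_step(model, opt, x_maj, y_maj, x_min, y_min))

The distribution-level penalty (MMD over per-sample curvatures) rather than a simple mean-difference penalty reflects the abstract's emphasis on matching the curvature distributions of the two groups, not just their averages.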