How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts.

Authors

Wang Haotao, Hong Junyuan, Zhou Jiayu, Wang Zhangyang

Affiliations

University of Texas at Austin.

Michigan State University.

Publication

Transactions on Machine Learning Research. 2023. Epub 2023 Mar 13.

Abstract

Increasing concerns have been raised about deep learning fairness in recent years. Existing fairness-aware machine learning methods mainly focus on the fairness of in-distribution data. However, in real-world applications, distribution shifts between the training and test data are common. In this paper, we first show that the fairness achieved by existing methods can be easily broken by slight distribution shifts. To solve this problem, we propose a novel fairness learning method termed CUrvature MAtching (CUMA), which achieves robust fairness generalizable to unseen domains with unknown distribution shifts. Specifically, CUMA enforces the model to have similar generalization ability on the majority and minority groups by matching the loss curvature distributions of the two groups. We evaluate our method on three popular fairness datasets. Compared with existing methods, CUMA achieves superior fairness under unseen distribution shifts, without sacrificing either the overall accuracy or the in-distribution fairness.
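
The abstract describes the core mechanism, matching the loss curvature distributions of the majority and minority groups, only at a high level. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: per-sample curvature is approximated here by the gradient change under a small random weight perturbation, and the two groups are compared only through the mean and standard deviation of their curvature values, whereas the paper matches the full distributions during training.

import torch
import torch.nn.functional as F

# Hypothetical sketch: a per-sample loss-curvature proxy and a majority/minority
# curvature-gap diagnostic. Function names and the estimator are illustrative,
# not taken from the CUMA codebase.

def per_sample_curvature(model, x, y, eps=1e-2):
    """Curvature proxy at one sample: ||grad(w + eps*v) - grad(w)|| / eps for a random direction v."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(F.cross_entropy(model(x), y), params)
    direction = [torch.randn_like(p) for p in params]
    with torch.no_grad():  # perturb the weights along the random direction
        for p, v in zip(params, direction):
            p.add_(eps * v)
    grads_pert = torch.autograd.grad(F.cross_entropy(model(x), y), params)
    with torch.no_grad():  # restore the original weights
        for p, v in zip(params, direction):
            p.sub_(eps * v)
    sq = sum(((gp - g) ** 2).sum() for gp, g in zip(grads_pert, grads))
    return sq.sqrt() / eps

def curvature_gap(model, majority_batch, minority_batch):
    """Mismatch between majority- and minority-group curvature statistics."""
    def group_curvatures(batch):
        xs, ys = batch  # (inputs, integer labels) for one demographic group
        return torch.stack([per_sample_curvature(model, x.unsqueeze(0), y.unsqueeze(0))
                            for x, y in zip(xs, ys)])
    c_maj = group_curvatures(majority_batch)
    c_min = group_curvatures(minority_batch)
    return (c_maj.mean() - c_min.mean()).abs() + (c_maj.std() - c_min.std()).abs()

Turning this diagnostic into a training regularizer would additionally require keeping the curvature estimates differentiable (e.g., create_graph=True in the autograd.grad calls) and replacing the two-moment comparison with a distribution-level distance.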

Similar Articles

Bipartite Ranking Fairness Through a Model Agnostic Ordering Adjustment.
IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):13235-13249. doi: 10.1109/TPAMI.2023.3290949.

Domain Generalization in Biosignal Classification.
IEEE Trans Biomed Eng. 2021 Jun;68(6):1978-1989. doi: 10.1109/TBME.2020.3045720. Epub 2021 May 21.

Fairness-aware recommendation with meta learning.
Sci Rep. 2024 May 2;14(1):10125. doi: 10.1038/s41598-024-60808-x.

Models and Mechanisms for Spatial Data Fairness.
Proc VLDB Endow. 2022 Oct;16(2):167-179. doi: 10.14778/3565816.3565820. Epub 2022 Oct 1.
