Fair classification domain adaptation: A dual adversarial learning approach.

Authors

Liang Yueqing, Chen Canyu, Tian Tian, Shu Kai

Affiliations

Department of Computer Science, Illinois Institute of Technology, Chicago, IL, United States.

Stuart School of Business, Illinois Institute of Technology, Chicago, IL, United States.

Publication

Front Big Data. 2023 Jan 4;5:1049565. doi: 10.3389/fdata.2022.1049565. eCollection 2022.

DOI: 10.3389/fdata.2022.1049565
PMID: 36687771
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9848304/
Abstract

Modern machine learning (ML) models are becoming increasingly popular and are widely used in decision-making systems. However, studies have shown critical issues of ML discrimination and unfairness, which hinder their adoption on high-stake applications. Recent research on fair classifiers has drawn significant attention to developing effective algorithms to achieve fairness and good classification performance. Despite the great success of these fairness-aware machine learning models, most of the existing models require sensitive attributes to pre-process the data, regularize the model learning or post-process the prediction to have fair predictions. However, sensitive attributes are often incomplete or even unavailable due to privacy, legal or regulation restrictions. Though we lack the sensitive attribute for training a fair model in the target domain, there might exist a similar domain that has sensitive attributes. Thus, it is important to exploit auxiliary information from a similar domain to help improve fair classification in the target domain. Therefore, in this paper, we study a novel problem of exploring domain adaptation for fair classification. We propose a new framework that can learn to adapt the sensitive attributes from a source domain for fair classification in the target domain. Extensive experiments on real-world datasets illustrate the effectiveness of the proposed model for fair classification, even when no sensitive attributes are available in the target domain.
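The abstract does not spell out the framework's architecture, so the sketch below illustrates only the core adversarial-debiasing mechanism such methods build on: a classifier is trained against an adversary that tries to recover the sensitive attribute from the classifier's output, and the classifier ascends the adversary's loss (gradient reversal) so its predictions carry less sensitive information. This is a toy, single-domain sketch with an assumed linear classifier and a one-parameter adversary; the function name, data, and hyperparameters are illustrative, not from the paper.

```python
import math
import random

def sigmoid(z):
    # clip the logit to keep math.exp from overflowing
    return 1.0 / (1.0 + math.exp(-max(min(z, 30.0), -30.0)))

def train_fair_classifier(X, y, s, lam=1.0, lr=0.1, epochs=300):
    """Linear logistic classifier w, plus a logistic adversary with a
    single weight `a` that tries to predict the sensitive attribute s
    from the classifier's logit. The classifier descends its own
    cross-entropy loss while ascending the adversary's (the reversed
    gradient term, scaled by lam); the adversary descends its loss."""
    n, d = len(X), len(X[0])
    w = [0.0] * d   # classifier weights
    a = 0.1         # adversary's weight on the classifier logit
    for _ in range(epochs):
        gw = [0.0] * d  # accumulated classifier gradient (with reversal)
        ga = 0.0        # accumulated adversary gradient
        for xi, yi, si in zip(X, y, s):
            z = sum(wj * xj for wj, xj in zip(w, xi))  # classifier logit
            p = sigmoid(z)        # predicted P(y = 1 | x)
            q = sigmoid(a * z)    # adversary's predicted P(s = 1 | logit)
            for j in range(d):
                # classification gradient minus reversed adversary gradient
                gw[j] += (p - yi) * xi[j] - lam * (q - si) * a * xi[j]
            ga += (q - si) * z    # adversary minimizes its own loss
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        a -= lr * ga / n
    return w

# Toy data: feature 0 is predictive of the label y, while feature 1
# merely leaks the sensitive attribute s (independent of y).
random.seed(1)
n = 300
s = [random.randint(0, 1) for _ in range(n)]
y = [random.randint(0, 1) for _ in range(n)]
X = [[yi + 0.3 * random.gauss(0, 1), si + 0.3 * random.gauss(0, 1)]
     for yi, si in zip(y, s)]
w = train_fair_classifier(X, [float(v) for v in y], [float(v) for v in s])
```

With the adversary active, the weight on the s-leaking feature is driven toward zero while the weight on the genuinely predictive feature remains large; the paper's contribution, not shown here, is obtaining the sensitive attribute signal from a *source* domain when the target domain lacks it.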


Figures (PMC images):
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c644/9848304/552a935bbbc1/fdata-05-1049565-g0001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c644/9848304/ca2b30f8fd61/fdata-05-1049565-g0002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c644/9848304/51e2fb6d4d66/fdata-05-1049565-g0003.jpg

Similar articles

1. Fair classification domain adaptation: A dual adversarial learning approach.
   Front Big Data. 2023 Jan 4;5:1049565. doi: 10.3389/fdata.2022.1049565. eCollection 2022.
2. Learning Fair Representations via Distance Correlation Minimization.
   IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2139-2152. doi: 10.1109/TNNLS.2022.3187165. Epub 2024 Feb 5.
3. MI-DABAN: A dual-attention-based adversarial network for motor imagery classification.
   Comput Biol Med. 2023 Jan;152:106420. doi: 10.1016/j.compbiomed.2022.106420. Epub 2022 Dec 13.
4. Federated Adversarial Debiasing for Fair and Transferable Representations.
   KDD. 2021 Aug;2021:617-627. doi: 10.1145/3447548.3467281. Epub 2021 Aug 14.
5. Disentangled contrastive learning for fair graph representations.
   Neural Netw. 2025 Jan;181:106781. doi: 10.1016/j.neunet.2024.106781. Epub 2024 Oct 5.
6. Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning.
   IEEE Trans Pattern Anal Mach Intell. 2022 Dec;44(12):9887-9903. doi: 10.1109/TPAMI.2021.3131222. Epub 2022 Nov 7.
7. Migrate demographic group for fair Graph Neural Networks.
   Neural Netw. 2024 Jul;175:106264. doi: 10.1016/j.neunet.2024.106264. Epub 2024 Mar 23.
8. Achieve fairness without demographics for dermatological disease diagnosis.
   Med Image Anal. 2024 Jul;95:103188. doi: 10.1016/j.media.2024.103188. Epub 2024 May 3.
9. Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation.
   IEEE Trans Image Process. 2021;30:8200-8211. doi: 10.1109/TIP.2021.3113576. Epub 2021 Sep 28.
10. A multicenter random forest model for effective prognosis prediction in collaborative clinical research network.
    Artif Intell Med. 2020 Mar;103:101814. doi: 10.1016/j.artmed.2020.101814. Epub 2020 Feb 5.

References cited in this article

1. On Target Shift in Adversarial Domain Adaptation.
   Proc Mach Learn Res. 2019 Apr;89:616-625.