
Fair classification domain adaptation: A dual adversarial learning approach.

Author Information

Liang Yueqing, Chen Canyu, Tian Tian, Shu Kai

Affiliations

Department of Computer Science, Illinois Institute of Technology, Chicago, IL, United States.

Stuart School of Business, Illinois Institute of Technology, Chicago, IL, United States.

Publication Information

Front Big Data. 2023 Jan 4;5:1049565. doi: 10.3389/fdata.2022.1049565. eCollection 2022.

Abstract

Modern machine learning (ML) models are increasingly popular and widely used in decision-making systems. However, studies have revealed critical issues of discrimination and unfairness in ML, which hinder its adoption in high-stakes applications. Recent research on fair classifiers has drawn significant attention to developing effective algorithms that achieve both fairness and good classification performance. Despite the success of these fairness-aware machine learning models, most existing models require sensitive attributes to pre-process the data, regularize model learning, or post-process the predictions in order to produce fair outcomes. However, sensitive attributes are often incomplete or even unavailable due to privacy, legal, or regulatory restrictions. Although the target domain may lack the sensitive attributes needed to train a fair model, there may exist a similar domain that does have them. Thus, it is important to exploit auxiliary information from such a similar domain to improve fair classification in the target domain. In this paper, we therefore study the novel problem of exploring domain adaptation for fair classification. We propose a new framework that learns to adapt the sensitive attributes from a source domain for fair classification in the target domain. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed model for fair classification, even when no sensitive attributes are available in the target domain.
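The abstract names the ingredients of the approach (a source domain with sensitive attributes, a target domain without them, and dual adversarial learning) but not the implementation. The PyTorch sketch below illustrates one plausible dual-adversarial setup under those assumptions: a shared encoder, a task classifier, a domain adversary for source/target alignment, and a sensitive-attribute adversary trained only on source data. Every name here (DualAdversarialFairNet, grad_reverse, the unit loss weights) is hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, flips (and scales)
    gradients on the backward pass, the standard adversarial-training trick."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class DualAdversarialFairNet(nn.Module):
    """Hypothetical dual-adversarial model: a shared encoder feeds a task
    classifier plus two adversaries. One adversary distinguishes source from
    target features (domain alignment); the other predicts the sensitive
    attribute (fairness). Reversed gradients push the encoder toward
    representations that are domain-invariant and sensitive-attribute-blind."""
    def __init__(self, in_dim, hid=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.classifier = nn.Linear(hid, 2)     # task-label head
        self.domain_adv = nn.Linear(hid, 2)     # source-vs-target head
        self.sensitive_adv = nn.Linear(hid, 2)  # sensitive-attribute head

    def forward(self, x, lam=1.0):
        h = self.encoder(x)
        return (self.classifier(h),
                self.domain_adv(grad_reverse(h, lam)),
                self.sensitive_adv(grad_reverse(h, lam)))

# One illustrative training step on toy data. Only the source batch carries
# sensitive labels (a_s); the target batch has task labels but no sensitive
# attribute, matching the problem setting described in the abstract.
model = DualAdversarialFairNet(in_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x_s, y_s, a_s = torch.randn(32, 10), torch.randint(0, 2, (32,)), torch.randint(0, 2, (32,))
x_t, y_t = torch.randn(32, 10), torch.randint(0, 2, (32,))

y_s_hat, d_s, s_s = model(x_s)
y_t_hat, d_t, _ = model(x_t)
domain_logits = torch.cat([d_s, d_t])
domain_labels = torch.cat([torch.zeros(32, dtype=torch.long),
                           torch.ones(32, dtype=torch.long)])
# Unit loss weights are an assumption; the paper's actual objective may differ.
loss = (ce(y_s_hat, y_s) + ce(y_t_hat, y_t)
        + ce(domain_logits, domain_labels) + ce(s_s, a_s))
opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch the sensitive-attribute adversary sees labeled examples only from the source domain, so gradient reversal is what transfers the fairness signal: the shared encoder is trained to fool both adversaries, yielding features that are useful for the task yet carry little information about domain membership or the sensitive attribute.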


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c644/9848304/552a935bbbc1/fdata-05-1049565-g0001.jpg
