

Interventional Fairness with Indirect Knowledge of Unobserved Protected Attributes.

Authors

Galhotra Sainyam, Shanmugam Karthikeyan, Sattigeri Prasanna, Varshney Kush R

Affiliations

Department of Computer Science, University of Chicago, Chicago, IL 60637, USA.

IBM Research, Yorktown Heights, NY 10598, USA.

Publication

Entropy (Basel). 2021 Nov 25;23(12):1571. doi: 10.3390/e23121571.

DOI: 10.3390/e23121571
PMID: 34945877
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8699829/
Abstract

The deployment of machine learning (ML) systems in applications with societal impact has motivated the study of fairness for marginalized groups. Often, the protected attribute is absent from the training dataset for legal reasons. However, datasets still contain proxy attributes that capture protected information and can inject unfairness in the ML model. Some deployed systems allow auditors, decision makers, or affected users to report issues or seek recourse by flagging individual samples. In this work, we examine such systems and consider a feedback-based framework where the protected attribute is unavailable and the flagged samples are indirect knowledge. The reported samples are used as guidance to identify the proxy attributes that are causally dependent on the (unknown) protected attribute. We work under the causal interventional fairness paradigm. Without requiring the underlying structural causal model a priori, we propose an approach that performs conditional independence tests on observed data to identify such proxy attributes. We theoretically prove the optimality of our algorithm, bound its complexity, and complement it with an empirical evaluation demonstrating its efficacy on various real-world and synthetic datasets.

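The abstract describes identifying proxy attributes by running independence tests between observed attributes and the flagged samples, which serve as indirect, noisy knowledge of the unobserved protected attribute. The sketch below is a rough illustration of that idea only, not the paper's algorithm: it uses a marginal chi-square test as a simplified stand-in for the conditional independence tests the paper performs, and all variable names and the synthetic data-generating process are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 5000

# Hypothetical setup: an unobserved binary protected attribute A drives a
# proxy attribute "zip_bucket"; "hours_worked" is independent of A.
A = rng.integers(0, 2, n)                               # unobserved in practice
zip_bucket = (A ^ (rng.random(n) < 0.15)).astype(int)   # noisy proxy for A
hours_worked = rng.integers(0, 2, n)                    # unrelated attribute

# Flagged samples as indirect knowledge of A: members of the disadvantaged
# group are more likely to report an issue or seek recourse.
flag = ((A == 1) & (rng.random(n) < 0.4)).astype(int)

def dependence_pvalue(attr, flag):
    """Chi-square test of independence between a binary attribute and the flags."""
    table = np.zeros((2, 2))
    for a, f in zip(attr, flag):
        table[a, f] += 1
    _, p, _, _ = chi2_contingency(table)
    return p

for name, col in [("zip_bucket", zip_bucket), ("hours_worked", hours_worked)]:
    p = dependence_pvalue(col, flag)
    verdict = "proxy candidate" if p < 0.01 else "no evidence of dependence"
    print(f"{name}: p = {p:.3g} -> {verdict}")
```

Because the flags are statistically dependent on A, any attribute that is also dependent on A (here `zip_bucket`) shows up as dependent on the flags, while attributes unrelated to A typically do not; the paper's method additionally conditions on other attributes to localize which variables are causally downstream of the protected attribute.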

Similar Articles

1. Interventional Fairness with Indirect Knowledge of Unobserved Protected Attributes.
Entropy (Basel). 2021 Nov 25;23(12):1571. doi: 10.3390/e23121571.
2. A novel approach for assessing fairness in deployed machine learning algorithms.
Sci Rep. 2024 Aug 1;14(1):17753. doi: 10.1038/s41598-024-68651-w.
3. Differential Fairness: An Intersectional Framework for Fair AI.
Entropy (Basel). 2023 Apr 14;25(4):660. doi: 10.3390/e25040660.
4. Fair classification domain adaptation: A dual adversarial learning approach.
Front Big Data. 2023 Jan 4;5:1049565. doi: 10.3389/fdata.2022.1049565. eCollection 2022.
5. Muffin: A Framework Toward Multi-Dimension AI Fairness by Uniting Off-the-Shelf Models.
Proc Des Autom Conf. 2023 Jul;2023. doi: 10.1109/dac56929.2023.10247765. Epub 2023 Sep 15.
6. Enhancing fairness in AI-enabled medical systems with the attribute neutral framework.
Nat Commun. 2024 Oct 10;15(1):8767. doi: 10.1038/s41467-024-52930-1.
7. Learning Fair Representations via Distance Correlation Minimization.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2139-2152. doi: 10.1109/TNNLS.2022.3187165. Epub 2024 Feb 5.
8. Bipartite Ranking Fairness Through a Model Agnostic Ordering Adjustment.
IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):13235-13249. doi: 10.1109/TPAMI.2023.3290949.
9. Fairness-aware recommendation with meta learning.
Sci Rep. 2024 May 2;14(1):10125. doi: 10.1038/s41598-024-60808-x.
10. Bias Analysis in Healthcare Time Series (BAHT) Decision Support Systems from Meta Data.
J Healthc Inform Res. 2023 Jun 19;7(2):225-253. doi: 10.1007/s41666-023-00133-6. eCollection 2023 Jun.

Cited By

1. Causal Inference for Heterogeneous Data and Information Theory.
Entropy (Basel). 2023 Jun 8;25(6):910. doi: 10.3390/e25060910.

References

1. Fair Inference on Outcomes.
Proc AAAI Conf Artif Intell. 2018 Feb;2018:1931-1940. Epub 2018 Apr 25.
2. Sparse inverse covariance estimation with the graphical lasso.
Biostatistics. 2008 Jul;9(3):432-41. doi: 10.1093/biostatistics/kxm045. Epub 2007 Dec 12.