

Muffin: A Framework Toward Multi-Dimension AI Fairness by Uniting Off-the-Shelf Models.

Authors

Sheng Yi, Yang Junhuan, Yang Lei, Shi Yiyu, Hu Jingtong, Jiang Weiwen

Affiliations

George Mason University.

University of Notre Dame.

Publication information

Proc Des Autom Conf. 2023 Jul;2023. doi: 10.1109/dac56929.2023.10247765. Epub 2023 Sep 15.

DOI: 10.1109/dac56929.2023.10247765
PMID: 38567296
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10987014/
Abstract

Model fairness (a.k.a. bias) has become one of the most critical problems in a wide range of AI applications. An unfair model in autonomous driving may cause a traffic accident if corner cases (e.g., extreme weather) cannot be handled fairly, and an AI model that misdiagnoses a certain group of people (e.g., those with brown and black skin) will incur healthcare disparities. In recent years, emerging research has addressed unfairness, but it mainly focuses on a single unfair attribute, such as skin tone; real-world data, however, commonly have multiple attributes, and unfairness can exist in more than one of them, a setting called "multi-dimensional fairness". In this paper, we first reveal a strong correlation between different unfair attributes: optimizing fairness on one attribute can lead to the collapse of others. We then propose Muffin, a novel multi-dimension fairness framework that includes an automatic tool to unite off-the-shelf models and improve fairness on multiple attributes simultaneously. Case studies on dermatology datasets with two unfair attributes show that an existing approach achieves a 21.05% fairness improvement on the first attribute while making the second attribute 1.85% less fair. In contrast, the proposed Muffin unites multiple models to achieve 26.32% and 20.37% fairness improvements on both attributes simultaneously, along with a 5.58% accuracy gain.
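To make the notion of "multi-dimensional fairness" concrete, here is a minimal sketch of one common way to score per-attribute fairness: the accuracy gap between demographic groups, computed independently for each sensitive attribute. The metric, function names, and toy data are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch: measure fairness per sensitive attribute as the accuracy gap
# (max group accuracy minus min group accuracy). A model can have a small
# gap on one attribute and a large gap on another, which is the
# multi-dimensional fairness problem the abstract describes.

def group_accuracy(preds, labels, groups, g):
    """Accuracy restricted to samples belonging to group g."""
    idx = [i for i, grp in enumerate(groups) if grp == g]
    if not idx:
        return 0.0
    return sum(preds[i] == labels[i] for i in idx) / len(idx)

def fairness_gap(preds, labels, groups):
    """Accuracy disparity across all groups of one sensitive attribute."""
    accs = [group_accuracy(preds, labels, groups, g) for g in set(groups)]
    return max(accs) - min(accs)

# Toy predictions with two sensitive attributes for the same samples
preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 0, 1, 0, 0]
attr_a = ["light", "light", "dark", "dark", "light", "dark"]   # e.g. skin tone
attr_b = ["young", "old", "young", "old", "young", "old"]      # e.g. age group

gap_a = fairness_gap(preds, labels, attr_a)  # large gap: unfair on attr_a
gap_b = fairness_gap(preds, labels, attr_b)  # zero gap: fair on attr_b
```

In this toy example the model is perfectly fair with respect to the second attribute yet highly unfair on the first, illustrating why optimizing a single attribute's gap is not sufficient.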


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ce5/10987014/08d090e76bc8/nihms-1980020-f0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ce5/10987014/cba7aa0ce022/nihms-1980020-f0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ce5/10987014/dce26d97a851/nihms-1980020-f0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ce5/10987014/0b09cfa233df/nihms-1980020-f0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ce5/10987014/5694e912eb1e/nihms-1980020-f0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ce5/10987014/f06c935f12d8/nihms-1980020-f0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ce5/10987014/242b677dc090/nihms-1980020-f0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ce5/10987014/cf92df521792/nihms-1980020-f0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7ce5/10987014/2f6158abe0aa/nihms-1980020-f0009.jpg

Similar articles

1. Muffin: A Framework Toward Multi-Dimension AI Fairness by Uniting Off-the-Shelf Models.
Proc Des Autom Conf. 2023 Jul;2023. doi: 10.1109/dac56929.2023.10247765. Epub 2023 Sep 15.
2. Achieve fairness without demographics for dermatological disease diagnosis.
Med Image Anal. 2024 Jul;95:103188. doi: 10.1016/j.media.2024.103188. Epub 2024 May 3.
3. A scoping review of fair machine learning techniques when using real-world data.
J Biomed Inform. 2024 Mar;151:104622. doi: 10.1016/j.jbi.2024.104622. Epub 2024 Mar 6.
4. Interpretability and fairness evaluation of deep learning models on MIMIC-IV dataset.
Sci Rep. 2022 May 3;12(1):7166. doi: 10.1038/s41598-022-11012-2.
5. Enhancing fairness in AI-enabled medical systems with the attribute neutral framework.
Nat Commun. 2024 Oct 10;15(1):8767. doi: 10.1038/s41467-024-52930-1.
6. MultiFair: Model Fairness With Multiple Sensitive Attributes.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5654-5667. doi: 10.1109/TNNLS.2024.3384181. Epub 2025 Feb 28.
7. Search-based Automatic Repair for Fairness and Accuracy in Decision-making Software.
Empir Softw Eng. 2024;29(1):36. doi: 10.1007/s10664-023-10419-3. Epub 2024 Jan 3.
8. A novel approach for assessing fairness in deployed machine learning algorithms.
Sci Rep. 2024 Aug 1;14(1):17753. doi: 10.1038/s41598-024-68651-w.
9. Fairness-aware recommendation with meta learning.
Sci Rep. 2024 May 2;14(1):10125. doi: 10.1038/s41598-024-60808-x.
10. Interventional Fairness with Indirect Knowledge of Unobserved Protected Attributes.
Entropy (Basel). 2021 Nov 25;23(12):1571. doi: 10.3390/e23121571.
