Muffin: A Framework Toward Multi-Dimension AI Fairness by Uniting Off-the-Shelf Models.

Author Information

Sheng Yi, Yang Junhuan, Yang Lei, Shi Yiyu, Hu Jingtong, Jiang Weiwen

Affiliations

George Mason University.

University of Notre Dame.

Publication Information

Proc Des Autom Conf. 2023 Jul;2023. doi: 10.1109/dac56929.2023.10247765. Epub 2023 Sep 15.

Abstract

Model fairness (a.k.a., bias) has become one of the most critical problems in a wide range of AI applications. An unfair model in autonomous driving may cause a traffic accident if corner cases (e.g., extreme weather) cannot be fairly handled; likewise, an AI model that misdiagnoses a certain group of people (e.g., those with brown or black skin) will incur healthcare disparities. In recent years, emerging research works have addressed unfairness, but they mainly focus on a single unfair attribute, like skin tone; however, real-world data commonly have multiple attributes, among which unfairness can exist in more than one attribute, a setting called "multi-dimensional fairness". In this paper, we first reveal a strong correlation between the different unfair attributes, i.e., optimizing fairness on one attribute will lead to the collapse of others. Then, we propose a novel Multi-Dimension Fairness framework, namely Muffin, which includes an automatic tool to unite off-the-shelf models to improve fairness on multiple attributes simultaneously. Case studies on dermatology datasets with two unfair attributes show that the existing approach can achieve 21.05% fairness improvement on the first attribute while degrading fairness on the second attribute by 1.85%. In contrast, the proposed Muffin can unite multiple models to achieve 26.32% and 20.37% fairness improvement on both attributes simultaneously; meanwhile, it obtains a 5.58% accuracy gain.
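
The abstract does not spell out how fairness is quantified or how the off-the-shelf models are united, so the sketch below is only a minimal illustration under two stated assumptions: fairness on an attribute is measured as the worst-case accuracy gap across that attribute's groups, and models are united by a simple per-sample majority vote. All names (accuracy_gap, multi_dimension_fairness, unite_by_majority_vote) are hypothetical and not taken from the paper; the majority vote is merely a stand-in for Muffin's automatic model-uniting tool.

```python
import numpy as np

# Hypothetical sketch, not the paper's implementation.
# Assumption 1: "fairness" on an attribute = largest accuracy gap between
#               any two groups of that attribute (smaller is fairer).
# Assumption 2: off-the-shelf models are united by per-sample majority vote.

def accuracy_gap(y_true, y_pred, groups):
    """Worst-case accuracy difference across the groups of one attribute."""
    accs = [np.mean(y_pred[groups == g] == y_true[groups == g])
            for g in np.unique(groups)]
    return max(accs) - min(accs)

def multi_dimension_fairness(y_true, y_pred, attributes):
    """Report the accuracy gap for every sensitive attribute (e.g., skin tone, age)."""
    return {name: accuracy_gap(y_true, y_pred, groups)
            for name, groups in attributes.items()}

def unite_by_majority_vote(model_preds):
    """Naive way to unite off-the-shelf models: per-sample majority vote."""
    stacked = np.stack(model_preds)  # shape: (n_models, n_samples)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, stacked)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=200)
    attributes = {"skin_tone": rng.integers(0, 2, size=200),
                  "age_group": rng.integers(0, 3, size=200)}
    # Three toy "off-the-shelf" models, each correct on ~80% of samples.
    preds = [np.where(rng.random(200) < 0.8, y_true, 1 - y_true) for _ in range(3)]
    united = unite_by_majority_vote(preds)
    print(multi_dimension_fairness(y_true, united, attributes))
```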

Similar Articles

MultiFair: Model Fairness With Multiple Sensitive Attributes.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5654-5667. doi: 10.1109/TNNLS.2024.3384181. Epub 2025 Feb 28.

Fairness-aware recommendation with meta learning.
Sci Rep. 2024 May 2;14(1):10125. doi: 10.1038/s41598-024-60808-x.
