D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias.

Author Information

Ghai Bhavya, Mueller Klaus

Publication Information

IEEE Trans Vis Comput Graph. 2023 Jan;29(1):473-482. doi: 10.1109/TVCG.2022.3209484. Epub 2022 Dec 16.

DOI: 10.1109/TVCG.2022.3209484
PMID: 36155458
Abstract

With the rise of AI, algorithms have become better at learning underlying patterns from the training data including ingrained social biases based on gender, race, etc. Deployment of such algorithms to domains such as hiring, healthcare, law enforcement, etc. has raised serious concerns about fairness, accountability, trust and interpretability in machine learning algorithms. To alleviate this problem, we propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases from tabular datasets. It uses a graphical causal model to represent causal relationships among different features in the dataset and as a medium to inject domain knowledge. A user can detect the presence of bias against a group, say females, or a subgroup, say black females, by identifying unfair causal relationships in the causal network and using an array of fairness metrics. Thereafter, the user can mitigate bias by refining the causal model and acting on the unfair causal edges. For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset based on the current causal model while ensuring a minimal change from the original dataset. Users can visually assess the impact of their interactions on different fairness metrics, utility metrics, data distortion, and the underlying data distribution. Once satisfied, they can download the debiased dataset and use it for any downstream application for fairer predictions. We evaluate D-BIAS by conducting experiments on 3 datasets and also a formal user study. We found that D-BIAS helps reduce bias significantly compared to the baseline debiasing approach across different fairness metrics while incurring little data distortion and a small loss in utility. Moreover, our human-in-the-loop based approach significantly outperforms an automated approach on trust, interpretability and accountability.
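The abstract mentions detecting group-level bias "using an array of fairness metrics." As a minimal sketch of what one such check looks like on tabular data (this is illustrative only and not the authors' code; the function name and toy data are assumptions), here is demographic parity difference, a common fairness metric that compares favorable-outcome rates across groups:

```python
# Illustrative sketch: demographic parity difference, one common
# fairness metric of the kind a tool like D-BIAS can surface for a
# protected group in a tabular dataset.
def demographic_parity_difference(outcomes, groups, protected, reference):
    """Gap in favorable-outcome rates between reference and protected groups.

    outcomes:  sequence of 0/1 labels or predictions (1 = favorable)
    groups:    sequence of group labels aligned with outcomes
    protected: group value suspected of receiving unfair treatment
    reference: group value to compare against
    """
    def rate(target):
        selected = [o for o, g in zip(outcomes, groups) if g == target]
        return sum(selected) / len(selected) if selected else 0.0
    return rate(reference) - rate(protected)

# Toy hiring data for two groups: a gap of 0 would indicate parity.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["m", "m", "m", "m", "f", "f", "f", "f"]
gap = demographic_parity_difference(outcomes, groups,
                                    protected="f", reference="m")
print(gap)  # 0.75 - 0.25 = 0.5
```

In the workflow the abstract describes, a nonzero gap like this would prompt the user to inspect the causal graph for unfair edges into the outcome, weaken or delete them, and re-check the metric on the simulated debiased dataset.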


Similar Articles

1. D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias.
IEEE Trans Vis Comput Graph. 2023 Jan;29(1):473-482. doi: 10.1109/TVCG.2022.3209484. Epub 2022 Dec 16.
2. A survey of recent methods for addressing AI fairness and bias in biomedicine.
J Biomed Inform. 2024 Jun;154:104646. doi: 10.1016/j.jbi.2024.104646. Epub 2024 Apr 25.
3. Causal fairness assessment of treatment allocation with electronic health records.
J Biomed Inform. 2024 Jul;155:104656. doi: 10.1016/j.jbi.2024.104656. Epub 2024 May 21.
4. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.
Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.
5. Interpretability and fairness evaluation of deep learning models on MIMIC-IV dataset.
Sci Rep. 2022 May 3;12(1):7166. doi: 10.1038/s41598-022-11012-2.
6. A scoping review of fair machine learning techniques when using real-world data.
J Biomed Inform. 2024 Mar;151:104622. doi: 10.1016/j.jbi.2024.104622. Epub 2024 Mar 6.
7. Evaluating and mitigating bias in machine learning models for cardiovascular disease prediction.
J Biomed Inform. 2023 Feb;138:104294. doi: 10.1016/j.jbi.2023.104294. Epub 2023 Jan 24.
8. Fairness in Mobile Phone-Based Mental Health Assessment Algorithms: Exploratory Study.
JMIR Form Res. 2022 Jun 14;6(6):e34366. doi: 10.2196/34366.
9. A survey of recent methods for addressing AI fairness and bias in biomedicine.
ArXiv. 2024 Feb 13:arXiv:2402.08250v1.
10. Metrics for Dataset Demographic Bias: A Case Study on Facial Expression Recognition.
IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5209-5226. doi: 10.1109/TPAMI.2024.3361979. Epub 2024 Jul 2.

Cited By

1. The Practical, Robust Implementation and Sustainability (PRISM)-capabilities model for use of Artificial Intelligence in community-engaged implementation science research.
Implement Sci. 2025 Aug 7;20(1):37. doi: 10.1186/s13012-025-01447-2.
2. Bias Mitigation in Primary Health Care Artificial Intelligence Models: Scoping Review.
J Med Internet Res. 2025 Jan 7;27:e60269. doi: 10.2196/60269.
3. The Algorithmic Divide: A Systematic Review on AI-Driven Racial Disparities in Healthcare.
J Racial Ethn Health Disparities. 2024 Dec 18. doi: 10.1007/s40615-024-02237-0.
4. Translation of AI into oncology clinical practice.
Oncogene. 2023 Oct;42(42):3089-3097. doi: 10.1038/s41388-023-02826-z. Epub 2023 Sep 8.
5. Guiding principles for the responsible development of artificial intelligence tools for healthcare.
Commun Med (Lond). 2023 Apr 1;3(1):47. doi: 10.1038/s43856-023-00279-9.