Suppr 超能文献


My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning.

Publication information

IEEE Trans Vis Comput Graph. 2024 Jan;30(1):327-337. doi: 10.1109/TVCG.2023.3327192. Epub 2023 Dec 27.

DOI: 10.1109/TVCG.2023.3327192
PMID: 37878441
Abstract

Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer "Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?" Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning.


Similar articles

1. My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning.
   IEEE Trans Vis Comput Graph. 2024 Jan;30(1):327-337. doi: 10.1109/TVCG.2023.3327192. Epub 2023 Dec 27.
2. D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias.
   IEEE Trans Vis Comput Graph. 2023 Jan;29(1):473-482. doi: 10.1109/TVCG.2022.3209484. Epub 2022 Dec 16.
3. Evaluating machine learning model bias and racial disparities in non-small cell lung cancer using SEER registry data.
   Health Care Manag Sci. 2024 Dec;27(4):631-649. doi: 10.1007/s10729-024-09691-6. Epub 2024 Nov 4.
4. Evaluating and mitigating bias in machine learning models for cardiovascular disease prediction.
   J Biomed Inform. 2023 Feb;138:104294. doi: 10.1016/j.jbi.2023.104294. Epub 2023 Jan 24.
5. SAF: Stakeholders' Agreement on Fairness in the Practice of Machine Learning Development.
   Sci Eng Ethics. 2023 Jul 24;29(4):29. doi: 10.1007/s11948-023-00448-y.
6. The Impact of Information Relevancy and Interactivity on Intensivists' Trust in a Machine Learning-Based Bacteremia Prediction System: Simulation Study.
   JMIR Hum Factors. 2024 Aug 1;11:e56924. doi: 10.2196/56924.
7. Trust in decision-making authorities dictates the form of the interactive relationship between outcome fairness and procedural fairness.
   Pers Soc Psychol Bull. 2015 Jan;41(1):19-34. doi: 10.1177/0146167214556237. Epub 2014 Nov 11.
8. Trust and Acceptance Challenges in the Adoption of AI Applications in Health Care: Quantitative Survey Analysis.
   J Med Internet Res. 2025 Mar 21;27:e65567. doi: 10.2196/65567.
9. A scoping review of fair machine learning techniques when using real-world data.
   J Biomed Inform. 2024 Mar;151:104622. doi: 10.1016/j.jbi.2024.104622. Epub 2024 Mar 6.
10. Multiple Forecast Visualizations (MFVs): Trade-offs in Trust and Performance in Multiple COVID-19 Forecast Visualizations.
    IEEE Trans Vis Comput Graph. 2023 Jan;29(1):12-22. doi: 10.1109/TVCG.2022.3209457. Epub 2022 Dec 16.