


SAF: Stakeholders' Agreement on Fairness in the Practice of Machine Learning Development.

Affiliations

University of Notre Dame, Notre Dame, USA.

IQS School of Management, Universitat Ramon Llull, Barcelona, Spain.

Publication Information

Sci Eng Ethics. 2023 Jul 24;29(4):29. doi: 10.1007/s11948-023-00448-y.

DOI: 10.1007/s11948-023-00448-y
PMID: 37486434
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10366323/
Abstract

This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML) and proposes an end-to-end methodology to translate the ethical principle of justice and fairness into the practice of ML development as an ongoing agreement with stakeholders. The pro-ethical iterative process presented in the paper aims to challenge asymmetric power dynamics in the fairness decision making within ML design and support ML development teams to identify, mitigate and monitor bias at each step of ML systems development. The process also provides guidance on how to explain the always imperfect trade-offs in terms of bias to users.


Similar Articles

1. SAF: Stakeholders' Agreement on Fairness in the Practice of Machine Learning Development.
   Sci Eng Ethics. 2023 Jul 24;29(4):29. doi: 10.1007/s11948-023-00448-y.
2. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms.
   AI Soc. 2023;38(2):549-563. doi: 10.1007/s00146-022-01455-6. Epub 2022 May 21.
3. Addressing Fairness, Bias, and Appropriate Use of Artificial Intelligence and Machine Learning in Global Health.
   Front Artif Intell. 2021 Apr 15;3:561802. doi: 10.3389/frai.2020.561802. eCollection 2020.
4. Can medical algorithms be fair? Three ethical quandaries and one dilemma.
   BMJ Health Care Inform. 2022 Apr;29(1). doi: 10.1136/bmjhci-2021-100445.
5. On Algorithmic Fairness in Medical Practice.
   Camb Q Healthc Ethics. 2022 Jan;31(1):83-94. doi: 10.1017/S0963180121000839.
6. Fairness in Artificial Intelligence: Regulatory Sandbox Evaluation of Bias Prevention for ECG Classification.
   Stud Health Technol Inform. 2023 May 18;302:488-489. doi: 10.3233/SHTI230184.
7. An empirical characterization of fair machine learning for clinical risk prediction.
   J Biomed Inform. 2021 Jan;113:103621. doi: 10.1016/j.jbi.2020.103621. Epub 2020 Nov 18.
8. Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care.
   BMC Med Ethics. 2023 Jun 20;24(1):42. doi: 10.1186/s12910-023-00917-w.
9. Fairness-aware machine learning engineering: how far are we?
   Empir Softw Eng. 2024;29(1):9. doi: 10.1007/s10664-023-10402-y. Epub 2023 Nov 24.
10. My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning.
    IEEE Trans Vis Comput Graph. 2024 Jan;30(1):327-337. doi: 10.1109/TVCG.2023.3327192. Epub 2023 Dec 27.