

Similar Articles

1. On Consequentialism and Fairness. Front Artif Intell. 2020 May 8;3:34. doi: 10.3389/frai.2020.00034. eCollection 2020.
2. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. AI Soc. 2023;38(2):549-563. doi: 10.1007/s00146-022-01455-6. Epub 2022 May 21.
3. An empirical characterization of fair machine learning for clinical risk prediction. J Biomed Inform. 2021 Jan;113:103621. doi: 10.1016/j.jbi.2020.103621. Epub 2020 Nov 18.
4. A novel approach for assessing fairness in deployed machine learning algorithms. Sci Rep. 2024 Aug 1;14(1):17753. doi: 10.1038/s41598-024-68651-w.
5. Fairness-aware machine learning engineering: how far are we? Empir Softw Eng. 2024;29(1):9. doi: 10.1007/s10664-023-10402-y. Epub 2023 Nov 24.
6. Learning Fair Representations via Distance Correlation Minimization. IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2139-2152. doi: 10.1109/TNNLS.2022.3187165. Epub 2024 Feb 5.
7. [Economic efficiency and fairness: two ethical criteria? A basic reflection]. Gesundheitswesen. 2005 May;67(5):325-31. doi: 10.1055/s-2005-858221.
8. Can medical algorithms be fair? Three ethical quandaries and one dilemma. BMJ Health Care Inform. 2022 Apr;29(1). doi: 10.1136/bmjhci-2021-100445.
9. Consequentialism and the Synthetic Biology Problem. Camb Q Healthc Ethics. 2017 Apr;26(2):206-229. doi: 10.1017/S0963180116000815.
10. Toward Learning Trustworthily from Data Combining Privacy, Fairness, and Explainability: An Application to Face Recognition. Entropy (Basel). 2021 Aug 14;23(8):1047. doi: 10.3390/e23081047.

Cited By

1. Animal Research Regulation: Improving Decision-Making and Adopting a Transparent System to Address Concerns around Approval Rate of Experiments. Animals (Basel). 2024 Mar 9;14(6):846. doi: 10.3390/ani14060846.
2. Towards an understanding of global brain data governance: ethical positions that underpin global brain data governance discourse. Front Big Data. 2023 Nov 9;6:1240660. doi: 10.3389/fdata.2023.1240660. eCollection 2023.
3. Five sources of bias in natural language processing. Lang Linguist Compass. 2021 Aug;15(8):e12432. doi: 10.1111/lnc3.12432. Epub 2021 Aug 20.
4. Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings. AI Soc. 2022 Jun 28:1-16. doi: 10.1007/s00146-022-01494-z.

References

1. Is there a social cost of randomization? Soc Choice Welfare. 2019 Apr;52(4):709-739. doi: 10.1007/s00355-018-1168-7. Epub 2019 Jan 24.
2. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. Big Data. 2017 Jun;5(2):153-163. doi: 10.1089/big.2016.0047.
3. Adaptive design of confirmatory trials: Advances and challenges. Contemp Clin Trials. 2015 Nov;45(Pt A):93-102. doi: 10.1016/j.cct.2015.06.007. Epub 2015 Jun 14.
4. How (and where) does moral judgment work? Trends Cogn Sci. 2002 Dec 1;6(12):517-523. doi: 10.1016/s1364-6613(02)02011-9.
5. Equipoise and the ethics of clinical research. N Engl J Med. 1987 Jul 16;317(3):141-5. doi: 10.1056/NEJM198707163170304.


On Consequentialism and Fairness.

Author Information

Card Dallas, Smith Noah A

Affiliations

Computer Science Department, Stanford University, Stanford, CA, United States.

Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, United States.

Publication Information

Front Artif Intell. 2020 May 8;3:34. doi: 10.3389/frai.2020.00034. eCollection 2020.

DOI: 10.3389/frai.2020.00034
PMID: 33733152
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7861221/
Abstract

Recent work on fairness in machine learning has primarily emphasized how to define, quantify, and encourage "fair" outcomes. Less attention has been paid, however, to the ethical foundations which underlie such efforts. Among the ethical perspectives that should be taken into consideration is consequentialism, the position that, roughly speaking, outcomes are all that matter. Although consequentialism is not free from difficulties, and although it does not necessarily provide a tractable way of choosing actions (because of the combined problems of uncertainty, subjectivity, and aggregation), it nevertheless provides a powerful foundation from which to critique the existing literature on machine learning fairness. Moreover, it brings to the fore some of the tradeoffs involved, including the problem of who counts, the pros and cons of using a policy, and the relative value of the distant future. In this paper we provide a consequentialist critique of common definitions of fairness within machine learning, as well as a machine learning perspective on consequentialism. We conclude with a broader discussion of the issues of learning and randomization, which have important implications for the ethics of automated decision making systems.
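For readers unfamiliar with how "fair" outcomes are quantified in this literature, the following is a minimal illustrative sketch (not taken from the paper) of two widely used group-fairness criteria, demographic parity and equalized odds; the function names and toy data are invented for the example.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between two groups.
    0 means the classifier satisfies demographic parity."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_diff(y_true, y_pred, group):
    """Largest gap across groups in false-positive rate (label 0)
    and true-positive rate (label 1). 0 means equalized odds holds."""
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        r_a = y_pred[mask & (group == 0)].mean()
        r_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r_a - r_b))
    return max(gaps)

# Toy binary predictions for two demographic groups (group 0 and 1).
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))          # 0.5
print(equalized_odds_diff(y_true, y_pred, group))      # 0.5
```

Note that these two criteria can conflict: a classifier can equalize positive-prediction rates while having unequal error rates across groups, which is one reason the paper argues the choice among definitions needs an explicit ethical foundation.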
