
Measuring disparate outcomes of content recommendation algorithms with distributional inequality metrics.

Author Information

Lazovich Tomo, Belli Luca, Gonzales Aaron, Bower Amanda, Tantipongpipat Uthaipon, Lum Kristian, Huszár Ferenc, Chowdhury Rumman

Affiliations

Twitter, Inc., San Francisco, CA 94103, USA.

University of Cambridge, Cambridge, UK.

Publication Information

Patterns (N Y). 2022 Aug 12;3(8):100568. doi: 10.1016/j.patter.2022.100568.

DOI: 10.1016/j.patter.2022.100568
PMID: 36033598
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9403369/
Abstract

The harmful impacts of algorithmic decision systems have recently come into focus, with many examples of machine learning (ML) models amplifying societal biases. In this paper, we propose adapting income inequality metrics from economics to complement existing model-level fairness metrics, which focus on intergroup differences of model performance. In particular, we evaluate their ability to measure disparities between exposures that individuals receive in a production recommendation system, the Twitter algorithmic timeline. We define desirable criteria for metrics to be used in an operational setting by ML practitioners. We characterize engagements with content on Twitter using these metrics and use the results to evaluate the metrics with respect to our criteria. We also show that we can use these metrics to identify content suggestion algorithms that contribute more strongly to skewed outcomes between users. Overall, we conclude that these metrics can be a useful tool for auditing algorithms in production settings.
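The distributional inequality metrics the abstract refers to are standard measures from economics, such as the Gini coefficient and the Atkinson index, applied to the distribution of exposure (e.g. impressions or engagements) across users rather than to income. As a rough illustration only, and not the authors' implementation, the Python sketch below computes both metrics over hypothetical per-user impression counts; the arrays and variant names are invented for the example.

import numpy as np

def gini(x):
    """Gini coefficient of a non-negative 1-D array.
    0 means every user receives equal exposure; values near 1 mean
    exposure is concentrated on very few users."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    total = x.sum()
    if n == 0 or total == 0:
        return 0.0
    ranks = np.arange(1, n + 1)
    # Closed form for sorted data: G = 2*sum(i*x_i) / (n*sum(x)) - (n+1)/n
    return 2.0 * np.sum(ranks * x) / (n * total) - (n + 1.0) / n

def atkinson(x, eps=0.5):
    """Atkinson index with inequality-aversion parameter eps (eps != 1)."""
    x = np.asarray(x, dtype=float)
    x = x[x > 0]
    mu = x.mean()
    return 1.0 - np.mean(x ** (1.0 - eps)) ** (1.0 / (1.0 - eps)) / mu

# Hypothetical per-user impression counts under two ranking variants.
impressions_a = np.array([120, 95, 110, 4000, 80, 60, 150, 9000])    # heavily skewed
impressions_b = np.array([400, 380, 500, 900, 420, 350, 600, 1100])  # more even

for name, imps in [("variant A", impressions_a), ("variant B", impressions_b)]:
    print(f"{name}: Gini={gini(imps):.3f}, Atkinson(0.5)={atkinson(imps):.3f}")

In this illustrative setup, a markedly higher Gini or Atkinson value for one candidate ranking algorithm than another would indicate, in the spirit of the paper, that it concentrates exposure on a smaller fraction of users.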


Figures (gr1 to gr9):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3d0/9403369/9278e50d8d8b/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3d0/9403369/e4c3b22f4ad5/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3d0/9403369/38ddc9200bc6/gr3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3d0/9403369/4cbbd6b38577/gr4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3d0/9403369/bb8a8e6f83e9/gr5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3d0/9403369/e4bbab44dabc/gr6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3d0/9403369/845c11c3c6ef/gr7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3d0/9403369/e91101ccb5fb/gr8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e3d0/9403369/fdd598689ab8/gr9.jpg

Similar Articles

1. Measuring disparate outcomes of content recommendation algorithms with distributional inequality metrics.
   Patterns (N Y). 2022 Aug 12;3(8):100568. doi: 10.1016/j.patter.2022.100568.
2. Erratum: Measuring disparate outcomes of content recommendation algorithms with distributional inequality metrics.
   Patterns (N Y). 2023 Aug 11;4(8):100822. doi: 10.1016/j.patter.2023.100822.
3. Fairness in Mobile Phone-Based Mental Health Assessment Algorithms: Exploratory Study.
   JMIR Form Res. 2022 Jun 14;6(6):e34366. doi: 10.2196/34366.
4. Propagation of societal gender inequality by internet search algorithms.
   Proc Natl Acad Sci U S A. 2022 Jul 19;119(29):e2204529119. doi: 10.1073/pnas.2204529119. Epub 2022 Jul 12.
5. D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias.
   IEEE Trans Vis Comput Graph. 2023 Jan;29(1):473-482. doi: 10.1109/TVCG.2022.3209484. Epub 2022 Dec 16.
6. Algorithmic Individual Fairness and Healthcare: A Scoping Review.
   medRxiv. 2024 Mar 26:2024.03.25.24304853. doi: 10.1101/2024.03.25.24304853.
7. Bipartite Ranking Fairness Through a Model Agnostic Ordering Adjustment.
   IEEE Trans Pattern Anal Mach Intell. 2023 Nov;45(11):13235-13249. doi: 10.1109/TPAMI.2023.3290949.
8. An empirical characterization of fair machine learning for clinical risk prediction.
   J Biomed Inform. 2021 Jan;113:103621. doi: 10.1016/j.jbi.2020.103621. Epub 2020 Nov 18.
9. Comparison of Methods to Reduce Bias From Clinical Prediction Models of Postpartum Depression.
   JAMA Netw Open. 2021 Apr 1;4(4):e213909. doi: 10.1001/jamanetworkopen.2021.3909.
10. Aligning AI Optimization to Community Well-Being.
   Int J Community Wellbeing. 2020;3(4):443-463. doi: 10.1007/s42413-020-00086-3. Epub 2020 Nov 4.
