Humans versus machines: Who is perceived to decide fairer? Experimental evidence on attitudes toward automated decision-making.

Author Information

Kern Christoph, Gerdon Frederic, Bach Ruben L, Keusch Florian, Kreuter Frauke

Affiliations

School of Social Sciences, University of Mannheim, A5, 6, 68159 Mannheim, Germany.

Mannheim Centre for European Social Research (MZES), University of Mannheim, A5, 6, 68159 Mannheim, Germany.

Publication Information

Patterns (N Y). 2022 Sep 29;3(10):100591. doi: 10.1016/j.patter.2022.100591. eCollection 2022 Oct 14.


DOI: 10.1016/j.patter.2022.100591
PMID: 36277823
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9583126/
Abstract

Human perceptions of fairness in (semi-)automated decision-making (ADM) constitute a crucial building block toward developing human-centered ADM solutions. However, measuring fairness perceptions is challenging because various context and design characteristics of ADM systems need to be disentangled. Particularly, ADM applications need to use the right degree of automation and granularity of data input to achieve efficiency and public acceptance. We present results from a large-scale vignette experiment that assessed fairness perceptions and the acceptability of ADM systems. The experiment varied context and design dimensions, with an emphasis on who makes the final decision. We show that automated recommendations in combination with a final human decider are perceived as fair as decisions made by a dominant human decider and as fairer than decisions made only by an algorithm. Our results shed light on the context dependence of fairness assessments and show that semi-automation of decision-making processes is often desirable.

Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a701/9583126/4e22378993ba/fx1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a701/9583126/5ce94ba83fec/gr1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a701/9583126/1f3ec76b50d7/gr2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a701/9583126/592921c3527b/gr3.jpg

Similar Articles

[1]
Humans versus machines: Who is perceived to decide fairer? Experimental evidence on attitudes toward automated decision-making.

Patterns (N Y). 2022 Sep 29

[2]
Inequality threat increases laypeople's, but not judges', acceptance of algorithmic decision making in court.

Law Hum Behav. 2024

[3]
Algorithms in the court: does it matter which part of the judicial decision-making is automated?

Artif Intell Law (Dordr). 2023 Jan 8

[4]
Applicant Fairness Perceptions of a Robot-Mediated Job Interview: A Video Vignette-Based Experimental Survey.

Front Robot AI. 2020 Nov 11

[5]
Fairness-aware machine learning engineering: how far are we?

Empir Softw Eng. 2024

[6]
From fair predictions to just decisions? Conceptualizing algorithmic fairness and distributive justice in the context of data-driven decision-making.

Front Sociol. 2022 Oct 10

[7]
Artificial fairness? Trust in algorithmic police decision-making.

J Exp Criminol. 2023

[8]
Conceptualizing Automated Decision-Making in Organizational Contexts.

Philos Technol. 2024

[9]
FairSight: Visual Analytics for Fairness in Decision Making.

IEEE Trans Vis Comput Graph. 2019 Aug 19

[10]
Models and Mechanisms for Spatial Data Fairness.

Proceedings VLDB Endowment. 2022 Oct

Cited By

[1]
Micro-narratives: A Scalable Method for Eliciting Stories of People's Lived Experience.

Proc SIGCHI Conf Hum Factor Comput Syst. 2025

[2]
Model interpretability enhances domain generalization in the case of textual complexity modeling.

Patterns (N Y). 2025 Feb 6

[3]
Artificial intelligence and telemedicine in the field of anaesthesiology, intensive care and pain medicine: A European survey.

Eur J Anaesthesiol Intensive Care. 2023 Aug 10

[4]
Deep learning models for tendinopathy detection: a systematic review and meta-analysis of diagnostic tests.

EFORT Open Rev. 2024 Oct 3

References

[1]
How transparency modulates trust in artificial intelligence.

Patterns (N Y). 2022 Feb 24

[2]
Challenging presumed technological superiority when working with (artificial) colleagues.

Sci Rep. 2022 Mar 8

[3]
Preferences and beliefs in ingroup favoritism.

Front Behav Neurosci. 2015 Feb 13
