

Public perception of accuracy-fairness trade-offs in algorithmic decisions in the United States.

Authors

Mourali Mehdi, Novakowski Dallas, Pogacar Ruth, Brigden Neil

Affiliations

Haskayne School of Business, University of Calgary, Calgary, Alberta, Canada.

Bissett School of Business, Mount Royal University, Calgary, Alberta, Canada.

Publication information

PLoS One. 2025 Mar 13;20(3):e0319861. doi: 10.1371/journal.pone.0319861. eCollection 2025.

Abstract

The naive approach to preventing discrimination in algorithmic decision-making is to exclude protected attributes from the model's inputs. This approach, known as "equal treatment," aims to treat all individuals equally regardless of their demographic characteristics. In practice, however, it can still result in unequal impacts across groups. Alternative notions of fairness have recently been proposed to reduce unequal impact, but these approaches may require sacrificing predictive accuracy. The present research investigates public attitudes toward these trade-offs in the United States. When are individuals more likely to support equal treatment algorithms (ETAs), characterized by higher predictive accuracy, and when do they prefer equal impact algorithms (EIAs), which reduce performance gaps between groups? A randomized conjoint experiment and a follow-up choice experiment revealed that support for EIAs decreased sharply as their accuracy gap grew, although impact parity was prioritized more when ETAs produced large outcome discrepancies. Preferences also polarized along partisan lines, with Democrats favoring impact parity over accuracy maximization and Republicans displaying the reverse preference. Gender and social justice orientation also significantly predicted EIA support. Overall, the findings demonstrate multidimensional drivers of attitudes toward algorithmic fairness, underscoring divisions around equality versus equity principles. Achieving standards for fair AI requires addressing conflicting human values through good governance.


Fig 1. https://cdn.ncbi.nlm.nih.gov/pmc/blobs/56d7/11906050/be50d6b85c87/pone.0319861.g001.jpg
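To make the trade-off concrete, here is a minimal, self-contained sketch (not from the paper; every model, label, and number below is hypothetical). It contrasts an equal treatment algorithm (ETA), which simply ignores the protected attribute, with an equal impact algorithm (EIA) by computing each model's overall accuracy and the gap in favorable-outcome rates between two groups:

```python
# Illustrative sketch only -- not the authors' code. All data are hypothetical.
from dataclasses import dataclass

@dataclass
class Prediction:
    group: str      # protected attribute (hypothetical groups "A" and "B")
    predicted: int  # model decision: 1 = favorable outcome, 0 = unfavorable
    actual: int     # ground-truth label

def accuracy(preds):
    """Overall predictive accuracy across all individuals."""
    return sum(p.predicted == p.actual for p in preds) / len(preds)

def favorable_rate(preds, group):
    """Share of a group receiving the favorable outcome (its impact)."""
    members = [p for p in preds if p.group == group]
    return sum(p.predicted for p in members) / len(members)

def impact_gap(preds):
    """Absolute gap in favorable-outcome rates between groups A and B."""
    return abs(favorable_rate(preds, "A") - favorable_rate(preds, "B"))

# Hypothetical scores from two models evaluated on the same six individuals.
# The ETA drops the protected attribute and maximizes accuracy; the EIA
# trades some accuracy for parity in group-level outcomes.
eta = [Prediction("A", 1, 1), Prediction("A", 1, 1), Prediction("A", 0, 0),
       Prediction("B", 0, 1), Prediction("B", 0, 0), Prediction("B", 0, 0)]
eia = [Prediction("A", 1, 1), Prediction("A", 0, 1), Prediction("A", 0, 0),
       Prediction("B", 1, 0), Prediction("B", 0, 0), Prediction("B", 0, 1)]

print(f"ETA: accuracy={accuracy(eta):.2f}, impact gap={impact_gap(eta):.2f}")
print(f"EIA: accuracy={accuracy(eia):.2f}, impact gap={impact_gap(eia):.2f}")
```

Run as-is, the sketch reports higher accuracy but a larger impact gap for the ETA, and the reverse for the EIA, mirroring the accuracy-fairness tension that participants in the study were asked to weigh.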
