Suppr超能文献

衡量大语言模型中的性别和种族偏见:来自自动化简历评估的交叉性证据。

Measuring gender and racial biases in large language models: Intersectional evidence from automated resume evaluation.

作者信息

An Jiafu, Huang Difang, Lin Chen, Tai Mingzhu

机构信息

Department of Real Estate and Construction, University of Hong Kong, Hong Kong SAR 999077, China.

Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China.

出版信息

PNAS Nexus. 2025 Mar 12;4(3):pgaf089. doi: 10.1093/pnasnexus/pgaf089. eCollection 2025 Mar.

Abstract

In traditional decision-making processes, social biases of human decision makers can lead to unequal economic outcomes for underrepresented social groups, such as women and racial/ethnic minorities (1-4). Recently, the growing popularity of large language model (LLM)-based AI signals a potential shift from human to AI-based decision-making. How would this transition affect the distributional outcomes across social groups? Here, we investigate the gender and racial biases of a number of commonly used LLMs, including OpenAI's GPT-3.5 Turbo and GPT-4o, Google's Gemini 1.5 Flash, Anthropic AI's Claude 3.5 Sonnet, and Meta's Llama 3-70b, in a high-stakes decision-making setting of assessing entry-level job candidates from diverse social groups. Instructing the models to score ∼361,000 resumes with randomized social identities, we find that the LLMs award higher assessment scores for female candidates with similar work experience, education, and skills, but lower scores for black male candidates with comparable qualifications. These biases may result in ∼1-3 percentage-point differences in hiring probabilities for otherwise similar candidates at a certain threshold and are consistent across various job positions and subsamples. Meanwhile, many models are biased against black male candidates. Our results indicate that LLM-based AI systems demonstrate significant biases, varying in terms of the directions and magnitudes across different social groups. Further research is needed to comprehend the root causes of these outcomes and develop strategies to minimize the remaining biases in AI systems. As AI-based decision-making tools are increasingly employed across diverse domains, our findings underscore the necessity of understanding and addressing the potential unequal outcomes to ensure equitable outcomes across social groups.

摘要

在传统决策过程中,人类决策者的社会偏见可能会给代表性不足的社会群体带来不平等的经济结果,比如女性以及种族/族裔少数群体(1 - 4)。最近,基于大语言模型(LLM)的人工智能越来越受欢迎,这标志着可能从人类决策转向基于人工智能的决策。这种转变将如何影响不同社会群体的分配结果呢?在此,我们在一个高风险决策场景中,即评估来自不同社会群体的初级求职者时,研究了一些常用大语言模型的性别和种族偏见,这些模型包括OpenAI的GPT - 3.5 Turbo和GPT - 4o、谷歌的Gemini 1.5 Flash、Anthropic AI的Claude 3.5 Sonnet以及Meta的Llama 3 - 70b。我们让这些模型对约36.1万份带有随机设定社会身份的简历进行评分,发现这些大语言模型会给工作经验、教育背景和技能相似的女性求职者给出更高的评估分数,但给资质相当的黑人男性求职者的分数却更低。在某个阈值下,这些偏见可能会导致其他条件相似的求职者在招聘概率上出现1至3个百分点的差异,并且在各种工作岗位和子样本中都是一致的。同时,许多模型对黑人男性求职者存在偏见。我们的结果表明,基于大语言模型的人工智能系统存在显著偏见,不同社会群体在偏见的方向和程度上各不相同。需要进一步研究以理解这些结果的根本原因,并制定策略来尽量减少人工智能系统中剩余的偏见。随着基于人工智能的决策工具在各个领域越来越多地被使用,我们的研究结果强调了理解和解决潜在不平等结果的必要性,以确保不同社会群体都能获得公平的结果。

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1cee/11937954/1b031d780666/pgaf089f1.jpg

文献检索

告别复杂PubMed语法,用中文像聊天一样搜索,搜遍4000万医学文献。AI智能推荐,让科研检索更轻松。

立即免费搜索

文件翻译

保留排版,准确专业,支持PDF/Word/PPT等文件格式,支持 12+语言互译。

免费翻译文档

深度研究

AI帮你快速写综述,25分钟生成高质量综述,智能提取关键信息,辅助科研写作。

立即免费体验