AI-generated faces influence gender stereotypes and racial homogenization.

Authors

AlDahoul Nouar, Rahwan Talal, Zaki Yasir

Affiliations

New York University, Abu Dhabi, UAE.

Publication Information

Sci Rep. 2025 Apr 25;15(1):14449. doi: 10.1038/s41598-025-99623-3.

DOI: 10.1038/s41598-025-99623-3
PMID: 40281283
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12032156/
Abstract

Text-to-image generative AI models such as Stable Diffusion are used daily by millions worldwide. However, the extent to which these models exhibit racial and gender stereotypes is not yet fully understood. Here, we document significant biases in Stable Diffusion across six races, two genders, 32 professions, and eight attributes. Additionally, we examine the degree to which Stable Diffusion depicts individuals of the same race as being similar to one another. This analysis reveals significant racial homogenization, e.g., depicting nearly all Middle Eastern men as bearded, brown-skinned, and wearing traditional attire. We then propose debiasing solutions that allow users to specify the desired distributions of race and gender when generating images while minimizing racial homogenization. Finally, using a preregistered survey experiment, we find evidence that being presented with inclusive AI-generated faces reduces people's racial and gender biases, while being presented with non-inclusive ones increases such biases, regardless of whether the images are labeled as AI-generated. Taken together, our findings emphasize the need to address biases and stereotypes in text-to-image models.
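The debiasing approach the abstract describes, letting users specify a target distribution over race and gender at generation time, can be illustrated with a short sketch. The code below is only an assumption about how such control could work at the prompt level (sampling demographic descriptors from user-supplied weights before each generation); it is not the authors' published implementation, and the weight tables and prompt template are illustrative choices.

# Minimal sketch: distribution-controlled prompt sampling for a text-to-image
# model. An illustrative assumption about the debiasing idea described in the
# abstract (user-specified race/gender distributions), not the authors'
# published implementation.
import random

# Hypothetical user-specified target distributions (weights sum to 1).
RACE_WEIGHTS = {"White": 0.25, "Black": 0.25, "East Asian": 0.25, "Middle Eastern": 0.25}
GENDER_WEIGHTS = {"woman": 0.5, "man": 0.5}

def sample_prompt(profession: str) -> str:
    """Draw race and gender from the target distributions and inject them
    into the text-to-image prompt for one generation."""
    race = random.choices(list(RACE_WEIGHTS), weights=list(RACE_WEIGHTS.values()), k=1)[0]
    gender = random.choices(list(GENDER_WEIGHTS), weights=list(GENDER_WEIGHTS.values()), k=1)[0]
    return f"A portrait photo of a {race} {gender} working as a {profession}"

if __name__ == "__main__":
    # Over many draws, the prompts (and hence the generated faces) follow
    # the requested demographic distribution.
    for prompt in (sample_prompt("doctor") for _ in range(8)):
        print(prompt)
    # To render images, feed each prompt to a text-to-image pipeline, e.g.
    # Hugging Face diffusers (requires a model download and ideally a GPU):
    #   from diffusers import StableDiffusionPipeline
    #   pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    #   image = pipe(prompt).images[0]

Note that prompt-level sampling only controls the marginal race/gender distribution; the abstract additionally mentions minimizing within-race homogenization, which this sketch does not capture.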


Figures:
Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7056/12032156/cebfe2de61af/41598_2025_99623_Fig1_HTML.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7056/12032156/98cacb51d20b/41598_2025_99623_Fig2_HTML.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7056/12032156/fff2ef2ea88b/41598_2025_99623_Fig3_HTML.jpg
Fig. 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7056/12032156/0750b2d7be03/41598_2025_99623_Fig4_HTML.jpg
Fig. 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7056/12032156/2f28ddfc4256/41598_2025_99623_Fig5_HTML.jpg

Similar Articles

1. AI-generated faces influence gender stereotypes and racial homogenization.
Sci Rep. 2025 Apr 25;15(1):14449. doi: 10.1038/s41598-025-99623-3.
2. Artificial Intelligence Portrayals in Orthopaedic Surgery: An Analysis of Gender and Racial Diversity Using Text-to-Image Generators.
J Bone Joint Surg Am. 2024 Dec 4;106(23):2278-2285. doi: 10.2106/JBJS.24.00150. Epub 2024 Jul 18.
3. Gender and Ethnicity Bias of Text-to-Image Generative Artificial Intelligence in Medical Imaging, Part 1: Preliminary Evaluation.
J Nucl Med Technol. 2024 Dec 4;52(4):356-359. doi: 10.2967/jnmt.124.268332.
4. Representation of intensivists' race/ethnicity, sex, and age by artificial intelligence: a cross-sectional study of two text-to-image models.
Crit Care. 2024 Nov 11;28(1):363. doi: 10.1186/s13054-024-05134-4.
5. Demographic Representation in 3 Leading Artificial Intelligence Text-to-Image Generators.
JAMA Surg. 2024 Jan 1;159(1):87-95. doi: 10.1001/jamasurg.2023.5695.
6. Black and White Adults' Racial and Gender Stereotypes of Psychopathology Symptoms in Black and White Children.
Res Child Adolesc Psychopathol. 2024 Jul;52(7):1023-1036. doi: 10.1007/s10802-024-01189-7. Epub 2024 Mar 16.
7. Are Black Women and Girls Associated With Danger? Implicit Racial Bias at the Intersection of Target Age and Gender.
Pers Soc Psychol Bull. 2019 Oct;45(10):1427-1439. doi: 10.1177/0146167219829182. Epub 2019 Mar 21.
8. What's in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT.
J Med Internet Res. 2024 Mar 5;26:e51837. doi: 10.2196/51837.
9. Gender and ethnicity bias in generative artificial intelligence text-to-image depiction of pharmacists.
Int J Pharm Pract. 2024 Nov 14;32(6):524-531. doi: 10.1093/ijpp/riae049.
10. Category salience and racial bias in weapon identification: A diffusion modeling approach.
J Pers Soc Psychol. 2021 Mar;120(3):672-693. doi: 10.1037/pspi0000279. Epub 2020 Jul 13.

Cited By

1. Beyond the Algorithm: A Perspective on Tackling Bias and Cultural Sensitivity in AI-Guided Aesthetic Standards for Cosmetic Surgery in the Middle East and North Africa (MENA) Region.
Clin Cosmet Investig Dermatol. 2025 Sep 4;18:2173-2182. doi: 10.2147/CCID.S543045. eCollection 2025.

References

1. AI generates covertly racist decisions about people based on their dialect.
Nature. 2024 Sep;633(8028):147-154. doi: 10.1038/s41586-024-07856-5. Epub 2024 Aug 28.
2. The effect of implicit racial bias on recognition of other-race faces.
Cogn Res Princ Implic. 2021 Oct 30;6(1):67. doi: 10.1186/s41235-021-00337-7.
3. A Memory Computational Basis for the Other-Race Effect.
Sci Rep. 2019 Dec 18;9(1):19399. doi: 10.1038/s41598-019-55350-0.
4. The accuracy, fairness, and limits of predicting recidivism.
Sci Adv. 2018 Jan 17;4(1):eaao5580. doi: 10.1126/sciadv.aao5580. eCollection 2018 Jan.
5. Adolescent Girls' STEM Identity Formation and Media Images of STEM Professionals: Considering the Influence of Contextual Cues.
Front Psychol. 2017 May 26;8:716. doi: 10.3389/fpsyg.2017.00716. eCollection 2017.
6. What happens before? A field experiment exploring how pay and representation differentially shape bias on the pathway into organizations.
J Appl Psychol. 2015 Nov;100(6):1678-712. doi: 10.1037/apl0000022. Epub 2015 Apr 13.
7. The Chicago face database: A free stimulus set of faces and norming data.
Behav Res Methods. 2015 Dec;47(4):1122-1135. doi: 10.3758/s13428-014-0532-5.
8. Science faculty's subtle gender biases favor male students.
Proc Natl Acad Sci U S A. 2012 Oct 9;109(41):16474-9. doi: 10.1073/pnas.1211286109. Epub 2012 Sep 17.
9. Implicit social cognition: attitudes, self-esteem, and stereotypes.
Psychol Rev. 1995 Jan;102(1):4-27. doi: 10.1037/0033-295x.102.1.4.
10. Stereotype threat and the intellectual test performance of African Americans.
J Pers Soc Psychol. 1995 Nov;69(5):797-811. doi: 10.1037//0022-3514.69.5.797.