

Humor as a window into generative AI bias.

Authors

Saumure Roger, De Freitas Julian, Puntoni Stefano

Affiliations

Department of Marketing, The Wharton School, University of Pennsylvania, Philadelphia, PA, USA.

Department of Marketing, Harvard Business School, Harvard University, Boston, MA, USA.

Publication

Sci Rep. 2025 Jan 8;15(1):1326. doi: 10.1038/s41598-024-83384-6.

DOI:10.1038/s41598-024-83384-6
PMID:39779743
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11711456/
Abstract

A preregistered audit of 600 images by generative AI across 150 different prompts explores the link between humor and discrimination in consumer-facing AI solutions. When ChatGPT updates images to make them "funnier", the prevalence of stereotyped groups changes. While stereotyped groups for politically sensitive traits (i.e., race and gender) are less likely to be represented after making an image funnier, stereotyped groups for less politically sensitive traits (i.e., older, visually impaired, and people with high body weight groups) are more likely to be represented.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8d15/11711456/a709ccb546e4/41598_2024_83384_Fig1_HTML.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8d15/11711456/0ec7d5d99dca/41598_2024_83384_Fig2_HTML.jpg

Similar Articles

1. Humor as a window into generative AI bias.
   Sci Rep. 2025 Jan 8;15(1):1326. doi: 10.1038/s41598-024-83384-6.
2. What's in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT.
   J Med Internet Res. 2024 Mar 5;26:e51837. doi: 10.2196/51837.
3. Who's funny: gender stereotypes, humor production, and memory bias.
   Psychon Bull Rev. 2012 Feb;19(1):108-12. doi: 10.3758/s13423-011-0161-2.
4. Gender and Ethnicity Bias of Text-to-Image Generative Artificial Intelligence in Medical Imaging, Part 1: Preliminary Evaluation.
   J Nucl Med Technol. 2024 Dec 4;52(4):356-359. doi: 10.2967/jnmt.124.268332.
5. Perceptions of AI engaging in human expression.
   Sci Rep. 2021 Oct 27;11(1):21181. doi: 10.1038/s41598-021-00426-z.
6. Generative artificial intelligence versus clinicians: Who diagnoses multiple sclerosis faster and with greater accuracy?
   Mult Scler Relat Disord. 2024 Oct;90:105791. doi: 10.1016/j.msard.2024.105791. Epub 2024 Aug 6.
7. How funny is ChatGPT? A comparison of human- and A.I.-produced jokes.
   PLoS One. 2024 Jul 3;19(7):e0305364. doi: 10.1371/journal.pone.0305364. eCollection 2024.
8. AI-generated faces influence gender stereotypes and racial homogenization.
   Sci Rep. 2025 Apr 25;15(1):14449. doi: 10.1038/s41598-025-99623-3.
9. A Road Map of Prompt Engineering for ChatGPT in Healthcare: A Perspective Study.
   Stud Health Technol Inform. 2024 Aug 22;316:998-1002. doi: 10.3233/SHTI240578.
10. Gender and ethnicity bias in generative artificial intelligence text-to-image depiction of pharmacists.
   Int J Pharm Pract. 2024 Nov 14;32(6):524-531. doi: 10.1093/ijpp/riae049.

References Cited in This Article

1. AI image generators often give racist and sexist results: can they be fixed?
   Nature. 2024 Mar;627(8005):722-725. doi: 10.1038/d41586-024-00674-9.
2. Assessment of the bias of artificial intelligence generated images and large language models on their depiction of a surgeon.
   ANZ J Surg. 2024 Mar;94(3):287-294. doi: 10.1111/ans.18792. Epub 2023 Dec 13.
3. Machine Learning as a Model for Cultural Learning: Teaching an Algorithm What it Means to be Fat.
   Sociol Methods Res. 2022 Nov;51(4):1484-1539. doi: 10.1177/00491241221122603. Epub 2022 Dec 2.
4. Identifying and predicting stereotype change in large language corpora: 72 groups, 115 years (1900-2015), and four text sources.
   J Pers Soc Psychol. 2023 Nov;125(5):969-990. doi: 10.1037/pspa0000354. Epub 2023 Aug 24.
5. Minority salience and the overestimation of individuals from minority groups in perception and memory.
   Proc Natl Acad Sci U S A. 2022 Mar 22;119(12):e2116884119. doi: 10.1073/pnas.2116884119. Epub 2022 Mar 14.
6. Weight stigma, policy initiatives, and harnessing social media to elevate activism.
   Body Image. 2022 Mar;40:131-137. doi: 10.1016/j.bodyim.2021.12.008. Epub 2021 Dec 23.
7. Questioning Racial and Gender Bias in AI-based Recommendations: Do Espoused National Cultural Values Matter?
   Inf Syst Front. 2022;24(5):1465-1481. doi: 10.1007/s10796-021-10156-2. Epub 2021 Jun 20.
8. Masculine defaults: Identifying and mitigating hidden cultural biases.
   Psychol Rev. 2020 Nov;127(6):1022-1052. doi: 10.1037/rev0000209. Epub 2020 Aug 17.
9. An inconvenienced youth? Ageism and its potential intergenerational roots.
   Psychol Bull. 2012 Sep;138(5):982-97. doi: 10.1037/a0027843. Epub 2012 Mar 26.
10. Who's funny: gender stereotypes, humor production, and memory bias.
   Psychon Bull Rev. 2012 Feb;19(1):108-12. doi: 10.3758/s13423-011-0161-2.