

Similar articles

1. Large language models and their big bullshit potential.
Ethics Inf Technol. 2024;26(4):67. doi: 10.1007/s10676-024-09802-5. Epub 2024 Oct 4.
2. Bullshit can be harmful to your health: Bullibility as a precursor to poor decision-making.
Curr Opin Psychol. 2024 Feb;55:101769. doi: 10.1016/j.copsyc.2023.101769. Epub 2023 Nov 23.
3. 'You can't bullshit a bullshitter' (or can you?): Bullshitting frequency predicts receptivity to various types of misleading information.
Br J Soc Psychol. 2021 Oct;60(4):1484-1505. doi: 10.1111/bjso.12447. Epub 2021 Feb 4.
4. Epistemically exploitative bullshit: A Sartrean account.
Eur J Philos. 2023 Sep;31(3):711-730. doi: 10.1111/ejop.12810. Epub 2022 Jul 3.
5. Bullshitting and persuasion: The persuasiveness of a disregard for the truth.
Br J Soc Psychol. 2021 Oct;60(4):1464-1483. doi: 10.1111/bjso.12453. Epub 2021 Feb 16.
6. Bullshit Ability as an Honest Signal of Intelligence.
Evol Psychol. 2021 Apr-Jun;19(2):14747049211000317. doi: 10.1177/14747049211000317.
7. It is double pleasure to deceive the deceiver: Machiavellianism is associated with producing but not necessarily with falling for bullshit.
Br J Soc Psychol. 2023 Jan;62(1):467-485. doi: 10.1111/bjso.12559. Epub 2022 Jul 8.
8. This Place Is Full of It: Towards an Organizational Bullshit Perception Scale.
Psychol Rep. 2022 Feb;125(1):448-463. doi: 10.1177/0033294120978162. Epub 2020 Dec 3.
9. If the real world were irrelevant, so to speak: The role of propositional truth-value in counterfactual sentence comprehension.
Cognition. 2012 Jan;122(1):102-9. doi: 10.1016/j.cognition.2011.09.001. Epub 2011 Oct 1.
10. Misperceiving Bullshit as Profound Is Associated with Favorable Views of Cruz, Rubio, Trump and Conservatism.
PLoS One. 2016 Apr 29;11(4):e0153419. doi: 10.1371/journal.pone.0153419. eCollection 2016.

References cited in this article

1. Detecting hallucinations in large language models using semantic entropy.
Nature. 2024 Jun;630(8017):625-630. doi: 10.1038/s41586-024-07421-0. Epub 2024 Jun 19.
2. 'Fighting fire with fire' - using LLMs to combat LLM hallucinations.
Nature. 2024 Jun;630(8017):569-570. doi: 10.1038/d41586-024-01641-0.
3. Role play with large language models.
Nature. 2023 Nov;623(7987):493-498. doi: 10.1038/s41586-023-06647-8. Epub 2023 Nov 8.
4. ChatGPT: these are not hallucinations - they're fabrications and falsifications.
Schizophrenia (Heidelb). 2023 Aug 19;9(1):52. doi: 10.1038/s41537-023-00379-4.
5. Artificial Hallucinations in ChatGPT: Implications in Scientific Writing.
Cureus. 2023 Feb 19;15(2):e35179. doi: 10.7759/cureus.35179. eCollection 2023 Feb.

Large language models and their big bullshit potential.

Author information

Sarah A Fisher

Affiliation

School of English, Communication and Philosophy, Cardiff University, Cardiff, UK.

Publication information

Ethics Inf Technol. 2024;26(4):67. doi: 10.1007/s10676-024-09802-5. Epub 2024 Oct 4.

DOI: 10.1007/s10676-024-09802-5
PMID: 39372727
Full text link: https://pmc.ncbi.nlm.nih.gov/articles/PMC11452423/
Abstract

Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by treating it as entirely irrelevant). So, do large language models really bullshit? I argue that they can, in the sense of issuing propositional content in response to fact-seeking prompts, without having first assessed that content for truth or falsity. However, I further argue that they need not bullshit, given appropriate guardrails. So, just as with human speakers, the propensity for a large language model to bullshit depends on its own particular make-up.
