Large language models (LLMs) and the institutionalization of misinformation.

Author information

Garry Maryanne, Chan Way Ming, Foster Jeffrey, Henkel Linda A

Affiliations

Psychology, The University of Waikato, Hamilton, New Zealand.

Publication information

Trends Cogn Sci. 2024 Dec;28(12):1078-1088. doi: 10.1016/j.tics.2024.08.007. Epub 2024 Oct 10.

Abstract

Large language models (LLMs), such as ChatGPT, flood the Internet with true and false information, crafted and delivered with techniques that psychological science suggests will encourage people to think that information is true. What's more, as people feed this misinformation back into the Internet, emerging LLMs will adopt it and feed it back in other models. Such a scenario means we could lose access to information that helps us tell what is real from unreal - to do 'reality monitoring.' If that happens, misinformation will be the new foundation we use to plan, to make decisions, and to vote. We will lose trust in our institutions and each other.
