Maryanne Garry, Way Ming Chan, Jeffrey Foster, Linda A. Henkel
Psychology, The University of Waikato, Hamilton, New Zealand.
Trends Cogn Sci. 2024 Dec;28(12):1078-1088. doi: 10.1016/j.tics.2024.08.007. Epub 2024 Oct 10.
Large language models (LLMs), such as ChatGPT, flood the Internet with true and false information, crafted and delivered with techniques that psychological science suggests will encourage people to think that information is true. What's more, as people feed this misinformation back into the Internet, emerging LLMs will adopt it and pass it on to other models. Such a scenario means we could lose access to the information that helps us tell the real from the unreal - to do 'reality monitoring.' If that happens, misinformation will be the new foundation we use to plan, to make decisions, and to vote. We will lose trust in our institutions and in each other.