
Conversational presentation mode increases credibility judgements during information search with ChatGPT.

Affiliations

Leibniz-Institut Für Wissensmedien (IWM), Schleichstraße 6, 72076, Tübingen, Germany.

University of Amsterdam, Amsterdam, The Netherlands.

Publication information

Sci Rep. 2024 Jul 25;14(1):17127. doi: 10.1038/s41598-024-67829-6.

Abstract

People increasingly use large language model (LLM)-based conversational agents to obtain information. However, the information these models provide is not always factually accurate. Thus, it is critical to understand what helps users adequately assess the credibility of the provided information. Here, we report the results of two preregistered experiments in which participants rated the credibility of accurate versus partially inaccurate information ostensibly provided by a dynamic text-based LLM-powered agent, a voice-based agent, or a static text-based online encyclopedia. We found that people were better at detecting inaccuracies when identical information was provided as static text compared to both types of conversational agents, regardless of whether information search applications were branded (ChatGPT, Alexa, and Wikipedia) or unbranded. Mediation analysis overall corroborated the interpretation that a conversational nature poses a threat to adequate credibility judgments. Our research highlights the importance of presentation mode when dealing with misinformation.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4cec/11272919/75d98913392b/41598_2024_67829_Fig1_HTML.jpg
