

When the bot walks the talk: Investigating the foundations of trust in an artificial intelligence (AI) chatbot.

Author information

Lalot Fanny, Bertram Anna-Marie

Affiliation

Faculty of Psychology, University of Basel.

Publication information

J Exp Psychol Gen. 2025 Feb;154(2):533-551. doi: 10.1037/xge0001696. Epub 2024 Dec 5.

Abstract

The concept of trust in artificial intelligence (AI) has been gaining increasing relevance for understanding and shaping human interaction with AI systems. Despite a growing literature, there are disputes as to whether the processes of trust in AI are similar to those of interpersonal trust (i.e., in fellow humans). The aim of the present article is twofold. First, we provide a systematic test of an integrative model of trust inspired by interpersonal trust research encompassing trust, its antecedents (trustworthiness and trust propensity), and its consequences (intentions to use the AI and willingness to disclose personal information). Second, we investigate the role of AI personalization on trust and trustworthiness, considering both their mean levels and their dynamic relationships. In two pilot studies (N = 313) and one main study (N = 1,001) focusing on AI chatbots, we find that the integrative model of trust is suitable for the study of trust in virtual AI. Perceived trustworthiness of the AI, and more specifically its ability and integrity dimensions, is a significant antecedent of trust, and so are anthropomorphism and propensity to trust smart technology. Trust, in turn, leads to greater intentions to use and willingness to disclose information to the AI. The personalized AI chatbot was perceived as more able and benevolent than the impersonal chatbot. It was also more anthropomorphized and led to greater usage intentions, but not to greater trust. Anthropomorphism, not trust, explained the greater intentions to use personalized AI. We discuss implications for research on trust in humans and in automation. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

