Marketing Discipline Group, University of Technology Sydney.
Department of Managerial Studies, University of Illinois at Chicago.
Psychol Sci. 2020 Apr;31(4):363-380. doi: 10.1177/0956797620904985. Epub 2020 Mar 30.
Although more individuals are relying on information provided by nonhuman agents, such as artificial intelligence and robots, little research has examined how persuasion attempts made by nonhuman agents might differ from persuasion attempts made by human agents. Drawing on construal-level theory, we posited that individuals would perceive artificial agents at a low level of construal because of the agents' lack of autonomous goals and intentions, which directs individuals' focus toward how these agents implement actions to serve humans rather than why they do so. Across multiple studies (total N = 1,668), we showed that these construal-based differences affect compliance with persuasive messages made by artificial agents. These messages are perceived to be more appropriate and are more effective when the message represents low-level as opposed to high-level construal features. These effects were moderated by the extent to which an artificial agent could independently learn from its environment, given that learning defies people's lay theories about artificial agents.