Tobias Rieger, Luisa Kugler, Dietrich Manzey, Eileen Roesler
Technische Universität Berlin, Berlin, Germany.
George Mason University, Fairfax, VA, USA.
Hum Factors. 2024 Aug;66(8):1995-2007. doi: 10.1177/00187208231197347. Epub 2023 Aug 26.
This study's purpose was to better understand the dynamics of trust attitude and behavior in human-agent interaction.
Whereas past research provided evidence for a perfect automation schema, more recent findings have contradicted it.
To disentangle these conflicting findings, we conducted an online experiment using a simulated medical X-ray task. We manipulated the framing of the support agent (i.e., artificial intelligence (AI) versus expert versus novice) between subjects and failure experience (i.e., perfect support, imperfect support, back-to-perfect support) within subjects. Trust attitude, trust behavior, and perceived reliability served as dependent variables.
Trust attitude and perceived reliability were higher for the human expert than for the AI, and higher for the AI than for the human novice. Moreover, the results showed the typical pattern of trust formation, dissolution, and restoration for trust attitude, trust behavior, and perceived reliability. Forgiveness after failure experience did not differ between agents.
The results strongly imply the existence of an imperfect automation schema. This illustrates the need to consider agent expertise for human-agent interaction.
When replacing human experts with AI as support agents, the challenge of a lower trust attitude toward the novel agent may arise.