Xu Min, Wang Yiwen
School of Economics and Management, Fuzhou University, Fuzhou, China.
School of Business Administration, Zhejiang Gongshang University, Hangzhou, China.
Br J Psychol. 2024 Oct 21. doi: 10.1111/bjop.12740.
Although artificial intelligence (AI)-based systems typically outperform human decision-makers, they are not immune to errors, which can lead users to lose trust in them and become less likely to use them again, a phenomenon known as algorithm aversion. The purpose of the present research was to investigate whether explainable AI (XAI) could serve as a viable strategy to counter algorithm aversion. We conducted two experiments to examine how XAI influences users' willingness to continue using AI-based systems after these systems exhibit errors. The results showed that, after observing the algorithms err, users' inclination to delegate decisions to or follow advice from intelligent agents decreased significantly compared with the period before the errors were revealed. However, explainability effectively mitigated this decline: users in the XAI condition were more likely than those in the non-XAI condition to continue using intelligent agents for subsequent tasks after seeing the algorithms err. We further found that explainability reduced users' decision regret, and that this decrease in decision regret mediated the relationship between explainability and re-use behaviour. These findings underscore the adaptive function of XAI in alleviating negative user experiences and maintaining user trust in the context of imperfect AI.