Naiseh Mohammad, Al-Thani Dena, Jiang Nan, Ali Raian
Faculty of Science and Technology, Bournemouth University, Fern Barrow, Poole, BH12 5BB UK.
College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar.
World Wide Web. 2021;24(5):1857-1884. doi: 10.1007/s11280-021-00916-0. Epub 2021 Aug 2.
Human-AI collaborative decision-making tools are increasingly applied in critical domains such as healthcare. However, these tools are often perceived as closed and opaque by human decision-makers. An essential requirement for their success is the ability to provide explanations about themselves that are understandable and meaningful to users. While explanations generally carry positive connotations, studies have shown that the assumption that users will interact and engage with these explanations can introduce trust calibration errors, such as facilitating irrational or less thoughtful agreement or disagreement with the AI recommendation. In this paper, we explore how to support trust calibration through the interaction design of explanations. Our research method included two main phases. We first conducted a think-aloud study with 16 participants aiming to reveal the main trust calibration errors concerning explainability in human-AI collaborative decision-making tools. Then, we conducted two co-design sessions with eight participants to identify design principles and techniques for explanations that support trust calibration. As a conclusion of our research, we provide five design principles: design for engagement, challenging habitual actions, attention guidance, friction, and supporting training and learning. Our findings are meant to pave the way towards a more integrated framework for designing explanations with trust calibration as a primary goal.