Sharan Navya Nishith, Romano Daniela Maria
Department of Psychology, University College London, London, WC1E 6BT, UK.
Department of Information Science, University College London, London, WC1E 6BT, UK.
Heliyon. 2020 Aug 28;6(8):e04572. doi: 10.1016/j.heliyon.2020.e04572. eCollection 2020 Aug.
We are increasingly exposed to applications that embed some sort of artificial intelligence (AI) algorithm, and there is a general belief that people trust any AI-based product or service without question. This study investigated the effect of personality characteristics (Big Five Inventory (BFI) traits and locus of control (LOC)) on trust behaviour, and the extent to which people trust the advice from an AI-based algorithm, more than humans, in a decision-making card game.
One hundred and seventy-one adult volunteers decided whether the final covered card, in a five-card sequence over ten trials, had a higher or lower number than the second-to-last card. They received either no suggestion (control), recommendations attributed to previous participants (humans), or recommendations attributed to an AI-based algorithm (AI). Trust behaviour was measured as response time and concordance (the number of participants' responses that matched the suggestion), and trust beliefs were measured as self-reported trust ratings.
It was found that LOC influences trust concordance and trust ratings, which are correlated. In particular, LOC negatively predicted trust concordance beyond the BFI dimensions: as LOC levels increased, people were less likely to follow suggestions from either humans or AI. Neuroticism negatively predicted trust ratings. Openness predicted reaction time, but only for suggestions from previous participants. Nevertheless, people chose the AI suggestions more often than those from humans, and self-reported that they believed such recommendations more.
The results indicate that LOC accounts for significant variance in trust concordance and trust ratings, predicting beyond the BFI traits, and affects whom people choose to trust, whether humans or AI. These findings also support the phenomenon of AI-based algorithm appreciation.