Human-Centered Computing, Clemson University, Clemson, SC, USA.
Department of Psychology, Clemson University, Clemson, SC, USA.
Hum Factors. 2024 Apr;66(4):1037-1055. doi: 10.1177/00187208221116952. Epub 2022 Aug 6.
Determining the efficacy of two trust repair strategies (apology and denial) for trust violations of an ethical nature by an autonomous teammate.
While ethics in human-AI interaction is extensively studied, little research has examined how decisions with ethical implications affect trust and performance within human-AI teams, or how trust can subsequently be repaired after such decisions.
Forty teams of two human participants and one autonomous teammate completed three team missions in a synthetic task environment. The autonomous teammate performed either an ethical or an unethical action during each mission, followed by either an apology or a denial. Measures of individual trust in the team, trust in the autonomous teammate, trust in the human teammate, perceived autonomous teammate ethicality, and team performance were collected.
Teams with unethical autonomous teammates reported significantly lower trust in the team and in the autonomous teammate, and perceived the autonomous teammate as substantially less ethical. Neither trust repair strategy effectively restored trust after an ethical violation. Autonomous teammate ethicality was not related to team score, although teams with unethical autonomous teammates did have shorter times.
Ethical violations significantly harm trust in the overall team and in the autonomous teammate but do not negatively affect team score. However, current trust repair strategies such as apologies and denials appear ineffective at restoring trust after this type of violation.
This research highlights the need to develop trust repair strategies specific to human-AI teams and trust violations of an ethical nature.