Department of Psychology, LMU Center for Leadership and People Management, LMU Munich, Munich, Germany.
Department of Business Psychology, Technical University of Applied Sciences Augsburg, Augsburg, Germany.
Sci Rep. 2024 Apr 28;14(1):9736. doi: 10.1038/s41598-024-60220-5.
Despite the rise of decision support systems enabled by artificial intelligence (AI) in personnel selection, their impact on decision-making processes is largely unknown. Consequently, we conducted five experiments (N = 1403 students and Human Resource Management (HRM) employees) investigating how people interact with AI-generated advice in a personnel selection task. In all pre-registered experiments, we presented correct and incorrect advice. In Experiments 1a and 1b, we manipulated the source of the advice (human vs. AI). In Experiments 2a, 2b, and 2c, we further manipulated the type of explainability of the AI advice (2a and 2b: heatmaps; 2c: charts). We hypothesized that accurate and explainable advice improves decision-making. Task performance, perceived advice quality, and confidence ratings were regressed on the independent variables. The results consistently showed that incorrect advice negatively impacted performance because people failed to dismiss it (i.e., overreliance). Additionally, we found that the effects of advice source and explainability on the dependent variables were limited. That participants' overreliance on inaccurate advice did not decrease even when the systems' predictions were made more explainable highlights the complexity of human-AI interaction and the need for regulation and quality standards in HRM.