Sungkyunkwan University, Seoul, South Korea.
Northeastern University, Boston, USA.
Sci Rep. 2024 Oct 29;14(1):25996. doi: 10.1038/s41598-024-76320-1.
As firms increasingly depend on artificial intelligence to evaluate people across various contexts (e.g., job interviews, performance reviews), research has explored the specific impact of algorithmic evaluations in the workplace. In particular, the extant body of work focuses on the possibility that employees may perceive bias in algorithmic evaluations. We show that although perceptions of bias are indeed a notable outcome of AI-driven assessments (vs. those performed by humans), a crucial risk inherent in algorithmic evaluations is that individuals perceive them as lacking respect and dignity. Specifically, we find that the effect of algorithmic (vs. human) evaluations on perceptions of disrespectful treatment (a) remains significant when controlling for perceived bias (but not vice versa), (b) is significant even when the effect on perceived bias is not, and (c) is larger than the effect on perceived bias. The effect of algorithmic evaluations on perceived disrespectful treatment is explained by the perception that individuals' detailed characteristics are not properly considered during an evaluation process conducted by AI.