Kubiak Emeric, Efremova Maria I, Baron Simon, Frasca Keely J
AssessFirst, Paris, France.
King's College London, Institute of Psychiatry, Psychology and Neuroscience, University of London, London, United Kingdom.
Front Psychol. 2023 Aug 15;14:1219865. doi: 10.3389/fpsyg.2023.1219865. eCollection 2023.
Gender biases in hiring decisions remain an issue in the workplace. Moreover, current gender-balancing techniques have weak scientific support and can lead to undesirable results, sometimes even activating stereotypes. Although hiring algorithms could offer a solution, they are still often regarded as tools that amplify human prejudices: talent specialists tend to prefer recommendations from human experts, while candidates question the fairness of such tools, in particular because they have little information about, and no control over, the standardized assessment. However, there is evidence that building algorithms on largely gender-blind data such as personality, which is broadly similar across genders and also predictive of job performance, could help reduce gender biases in hiring. The goal of this study was therefore to test the adverse impact of a personality-based algorithm across a wide array of occupations.
The study analyzed 208 predictive models designed for 18 employers. These models were tested on a global sample of 273,293 potential candidates for the respective roles.
Weighted mean impact ratios of 0.91 (Female-Male) and 0.90 (Male-Female) were observed; results were similar when impact ratios were analyzed across 21 different job categories.
Our results suggest that personality-based algorithms could help organizations screen candidates in the early stages of the selection process while mitigating the risks of gender discrimination.
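For readers who want to reproduce this kind of fairness check, the sketch below shows how an adverse impact ratio and a sample-size-weighted mean across models might be computed. It assumes the conventional definition (selection rate of one gender divided by the selection rate of the other, with the four-fifths rule as the usual threshold) and uses illustrative numbers; the function names, weighting scheme, and selection counts are assumptions, not values from the study.

# Minimal sketch, assuming the conventional adverse impact ratio and
# sample-size weighting across models; names and numbers are illustrative.

def impact_ratio(selected_focal, total_focal, selected_ref, total_ref):
    """Selection rate of the focal group divided by that of the reference group."""
    return (selected_focal / total_focal) / (selected_ref / total_ref)

def weighted_mean_impact_ratio(models):
    """Average impact ratios across models, weighted by candidate count.

    models: list of (impact_ratio, n_candidates) tuples.
    """
    total_n = sum(n for _, n in models)
    return sum(ir * n for ir, n in models) / total_n

# Hypothetical example for one predictive model: 45 of 120 women and
# 50 of 130 men pass the personality-based screen.
ir = impact_ratio(45, 120, 50, 130)
print(f"Female-Male impact ratio: {ir:.2f}")      # ~0.98
print("Passes the four-fifths rule:", ir >= 0.8)  # True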