

Should I Trust the Artificial Intelligence to Recruit? Recruiters' Perceptions and Behavior When Faced With Algorithm-Based Recommendation Systems During Resume Screening.

Authors

Lacroux Alain, Martin-Lacroux Christelle

Affiliations

Univ. Polytechnique Hauts de France, IDH, CRISS, Valenciennes, France.

Univ. Grenoble Alpes, Grenoble INP, CERAG, Grenoble, France.

Publication Information

Front Psychol. 2022 Jul 6;13:895997. doi: 10.3389/fpsyg.2022.895997. eCollection 2022.

DOI: 10.3389/fpsyg.2022.895997
PMID: 35874355
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9298741/

Abstract

Resume screening assisted by decision support systems that incorporate artificial intelligence is currently undergoing a strong development in many organizations, raising technical, managerial, legal, and ethical issues. The purpose of the present paper is to better understand the reactions of recruiters when they are offered algorithm-based recommendations during resume screening. Two polarized attitudes have been identified in the literature on users' reactions to algorithm-based recommendations: algorithm aversion, which reflects a general distrust and preference for human recommendations; and automation bias, which corresponds to an overconfidence in the decisions or recommendations made by algorithmic decision support systems (ADSS). Drawing on results obtained in the field of automated decision support areas, we make the general hypothesis that recruiters trust human experts more than ADSS, because they distrust algorithms for subjective decisions such as recruitment. An experiment on resume screening was conducted on a sample of professionals (N = 694) involved in the screening of job applications. They were asked to study a job offer, then evaluate two fictitious resumes in a 2 × 2 factorial design with manipulation of the type of recommendation (no recommendation/algorithmic recommendation/human expert recommendation) and of the consistency of the recommendations (consistent vs. inconsistent recommendation). Our results support the general hypothesis of preference for human recommendations: recruiters exhibit a higher level of trust toward human expert recommendations compared with algorithmic recommendations. However, we also found that the recommendation's consistency has a differential and unexpected impact on decisions: in the presence of an inconsistent algorithmic recommendation, recruiters favored the unsuitable over the suitable resume. Our results also show that specific personality traits (extraversion, neuroticism, and self-confidence) are associated with a differential use of algorithmic recommendations. Implications for research and HR policies are finally discussed.


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ec62/9298741/c824f1885bf2/fpsyg-13-895997-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ec62/9298741/bc37f08ac72b/fpsyg-13-895997-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ec62/9298741/8aaade230a2f/fpsyg-13-895997-g003.jpg

Similar Articles

1. Should I Trust the Artificial Intelligence to Recruit? Recruiters' Perceptions and Behavior When Faced With Algorithm-Based Recommendation Systems During Resume Screening.
Front Psychol. 2022 Jul 6;13:895997. doi: 10.3389/fpsyg.2022.895997. eCollection 2022.
2. Michael is better than Mehmet: exploring the perils of algorithmic biases and selective adherence to advice from automated decision support systems in hiring.
Front Psychol. 2024 Sep 10;15:1416504. doi: 10.3389/fpsyg.2024.1416504. eCollection 2024.
3. Collaboration among recruiters and artificial intelligence: removing human prejudices in employment.
Cogn Technol Work. 2023;25(1):135-149. doi: 10.1007/s10111-022-00716-0. Epub 2022 Sep 28.
4. What a difference your e-mail makes: effects of informal e-mail addresses in online résumé screening.
Cyberpsychol Behav Soc Netw. 2015 Mar;18(3):135-40. doi: 10.1089/cyber.2014.0542.
5. Artificial fairness? Trust in algorithmic police decision-making.
J Exp Criminol. 2023;19(1):165-189. doi: 10.1007/s11292-021-09484-9. Epub 2021 Sep 12.
6. Clothing Design Style Recommendation Using Decision Tree Algorithm Combined with Deep Learning.
Comput Intell Neurosci. 2022 Aug 10;2022:5745457. doi: 10.1155/2022/5745457. eCollection 2022.
7. A Cogitation on the ChatGPT Craze from the Perspective of Psychological Algorithm Aversion and Appreciation.
Psychol Res Behav Manag. 2023 Sep 13;16:3837-3844. doi: 10.2147/PRBM.S430936. eCollection 2023.
8. Perceptions of Justice By Algorithms.
Artif Intell Law (Dordr). 2023;31(2):269-292. doi: 10.1007/s10506-022-09312-z. Epub 2022 Apr 5.
9. How Terminology Affects Users' Responses to System Failures.
Hum Factors. 2024 Aug;66(8):2082-2103. doi: 10.1177/00187208231202572. Epub 2023 Sep 21.
10. Towards the design of user-centric strategy recommendation systems for collaborative Human-AI tasks.
Int J Hum Comput Stud. 2024 Apr;184. doi: 10.1016/j.ijhcs.2023.103216. Epub 2024 Jan 6.

Cited By

1. Michael is better than Mehmet: exploring the perils of algorithmic biases and selective adherence to advice from automated decision support systems in hiring.
Front Psychol. 2024 Sep 10;15:1416504. doi: 10.3389/fpsyg.2024.1416504. eCollection 2024.
2. Prevalence of bias against neurodivergence-related terms in artificial intelligence language models.
Autism Res. 2024 Feb;17(2):234-248. doi: 10.1002/aur.3094. Epub 2024 Jan 29.
3. Humans inherit artificial intelligence biases.
Sci Rep. 2023 Oct 3;13(1):15737. doi: 10.1038/s41598-023-42384-8.
4. Check the box! How to deal with automation bias in AI-based personnel selection.
Front Psychol. 2023 Apr 5;14:1118723. doi: 10.3389/fpsyg.2023.1118723. eCollection 2023.

References

1. Assessing Two Dimensions of Interpersonal Trust: Other-Focused Trust and Propensity to Trust.
Front Psychol. 2021 Jul 27;12:654735. doi: 10.3389/fpsyg.2021.654735. eCollection 2021.
2. Automated video interview personality assessments: Reliability, validity, and generalizability investigations.
J Appl Psychol. 2022 Aug;107(8):1323-1351. doi: 10.1037/apl0000695. Epub 2021 Jun 10.
3. People are averse to machines making moral decisions.
Cognition. 2018 Dec;181:21-34. doi: 10.1016/j.cognition.2018.08.003. Epub 2018 Aug 11.
4. Initial investigation into computer scoring of candidate essays for personnel selection.
J Appl Psychol. 2016 Jul;101(7):958-75. doi: 10.1037/apl0000108. Epub 2016 Apr 14.
5. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems.
Hum Factors. 2016 May;58(3):377-400. doi: 10.1177/0018720816634228. Epub 2016 Mar 22.
6. Trust in automation: integrating empirical evidence on factors that influence trust.
Hum Factors. 2015 May;57(3):407-34. doi: 10.1177/0018720814547570. Epub 2014 Sep 2.
7. Understanding reliance on automation: effects of error type, error distribution, age and experience.
Theor Issues Ergon Sci. 2014 Mar;15(2):134-160. doi: 10.1080/1463922X.2011.611269.
8. Algorithm aversion: people erroneously avoid algorithms after seeing them err.
J Exp Psychol Gen. 2015 Feb;144(1):114-26. doi: 10.1037/xge0000033. Epub 2014 Nov 17.
9. Mechanical versus clinical data combination in selection and admissions decisions: a meta-analysis.
J Appl Psychol. 2013 Nov;98(6):1060-72. doi: 10.1037/a0034156. Epub 2013 Sep 16.
10. Individual differences in response to automation: the five factor model of personality.
J Exp Psychol Appl. 2011 Jun;17(2):71-96. doi: 10.1037/a0024170.