

How humans impair automated deception detection performance.

Affiliations

Department of Methodology and Statistics, Tilburg University, The Netherlands; Department of Security and Crime Science, University College London, UK.

Department of Psychology, University of Amsterdam, The Netherlands.

Publication information

Acta Psychol (Amst). 2021 Feb;213:103250. doi: 10.1016/j.actpsy.2020.103250. Epub 2021 Jan 13.

Abstract

BACKGROUND

Deception detection is a prevalent problem for security practitioners. Given the need for more scalable, large-scale approaches, automated methods using machine learning have gained traction. However, detection performance still entails considerable error rates. Findings from different domains suggest that hybrid human-machine integrations could offer a viable path in detection tasks.

METHOD

We collected a corpus of truthful and deceptive answers about participants' autobiographical intentions (n = 1640) and tested whether a combination of supervised machine learning and human judgment could improve deception detection accuracy. Human judges were presented with the outcome of the automated credibility judgment of truthful or deceptive statements. They could either fully overrule it (hybrid-overrule condition) or adjust it within a given boundary (hybrid-adjust condition).
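The supervised classification step can be illustrated with a minimal sketch. This is not the authors' actual pipeline; the toy corpus, bag-of-words features, and Laplace-smoothed naive Bayes model are assumptions chosen purely to show how a text classifier can be trained to separate truthful from deceptive statements:

```python
import math
from collections import Counter

def train(statements, labels):
    """Fit per-class word counts and class priors (bag-of-words naive Bayes)."""
    counts = {c: Counter() for c in set(labels)}
    priors = Counter(labels)
    for text, label in zip(statements, labels):
        counts[label].update(text.lower().split())
    vocab = set(w for c in counts.values() for w in c)
    return counts, priors, vocab

def predict(text, counts, priors, vocab):
    """Return the class with the highest smoothed log-probability."""
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for c, wc in counts.items():
        lp = math.log(priors[c] / total)
        denom = sum(wc.values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((wc[w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Hypothetical toy statements about autobiographical intentions.
X = ["i will visit my sister next week",
     "i plan to attend a conference on friday",
     "i am definitely going to the gym honestly",
     "honestly i would never lie about my plans"]
y = ["truthful", "truthful", "deceptive", "deceptive"]

model = train(X, y)
print(predict("i will attend my sister's conference", *model))
```

A real system would use a much larger labeled corpus and richer features; the sketch only shows the train-then-classify structure that the human judges' input was layered on top of.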

RESULTS

The data suggest that in neither of the hybrid conditions did the human judgment add a meaningful contribution. Machine learning in isolation identified truth-tellers and liars with an overall accuracy of 69%. Human involvement through hybrid-overrule decisions brought the accuracy back to chance level. The hybrid-adjust condition did not improve deception detection performance. The decision-making strategies of humans suggest that the truth bias - the tendency to assume the other is telling the truth - could explain the detrimental effect.
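The hybrid-adjust mechanic can be sketched as a bounded correction: the human may move the machine's credibility score, but only within a fixed window around it. The score scale and window width below are illustrative assumptions, not the study's actual parameters:

```python
def hybrid_adjust(machine_score, human_score, boundary=0.2):
    """Clamp the human's score to within `boundary` of the machine's score.

    Scores are assumed to lie in [0, 1], higher meaning 'more truthful'.
    The boundary limits how far a (possibly truth-biased) human judgment
    can pull the final decision away from the machine's output.
    """
    lo = max(0.0, machine_score - boundary)
    hi = min(1.0, machine_score + boundary)
    return min(max(human_score, lo), hi)

# Machine says 0.30 (likely deceptive); a truth-biased human says 0.90.
# The adjustment is capped at the upper edge of the window: 0.30 + 0.20.
print(hybrid_adjust(0.30, 0.90))
```

Under this kind of bound, a strong truth bias can still push every borderline machine judgment toward "truthful", which is consistent with the detrimental effect the authors report.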

CONCLUSIONS

The current study does not support the notion that humans can meaningfully add to the deception detection performance of a machine learning system. All data are available at https://osf.io/45z7e/.

