The impact of AI errors in a human-in-the-loop process.

Affiliations

Bikolabs/Biko, Pamplona, Spain.

Departamento de Psicología, Universidad de Deusto, Avda. Universidad 24, 48007, Bilbao, Spain.

Publication information

Cogn Res Princ Implic. 2024 Jan 7;9(1):1. doi: 10.1186/s41235-023-00529-3.

Abstract

Automated decision-making is becoming increasingly common in the public sector. As a result, political institutions recommend the presence of humans in these decision-making processes as a safeguard against potentially erroneous or biased algorithmic decisions. However, the scientific literature on human-in-the-loop performance is not conclusive about the benefits and risks of such human presence, nor does it clarify which aspects of this human-computer interaction may influence the final decision. In two experiments, we simulate an automated decision-making process in which participants judge multiple defendants in relation to various crimes, and we manipulate the time at which participants receive support from a supposed automated system with Artificial Intelligence (before or after they make their judgments). Our results show that human judgment is affected when participants receive incorrect algorithmic support, particularly when they receive it before providing their own judgment, resulting in reduced accuracy. The data and materials for these experiments are freely available at the Open Science Framework: https://osf.io/b6p4z/. Experiment 2 was preregistered.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/819b/10772030/b7a5628e6494/41235_2023_529_Fig1_HTML.jpg
