
Trust with increasing and decreasing reliability.

Affiliation

Dalhousie University, Canada.

Publication

Hum Factors. 2024 Dec;66(12):2569-2589. doi: 10.1177/00187208241228636. Epub 2024 Mar 6.

DOI: 10.1177/00187208241228636
PMID: 38445652
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11487872/
Abstract

OBJECTIVE

The primary purpose was to determine how trust changes over time when automation reliability increases or decreases. A secondary purpose was to determine how task-specific self-confidence is associated with trust and reliability level.

BACKGROUND

Both overtrust and undertrust can be detrimental to system performance; therefore, the temporal dynamics of trust with changing reliability level need to be explored.

METHOD

Two experiments used a dominant-color identification task, where automation provided a recommendation to users, with the reliability of the recommendation changing over 300 trials. In Experiment 1, two groups of participants interacted with the system: one group started with a 50% reliable system which increased to 100%, while the other used a system that decreased from 100% to 50%. Experiment 2 included a group where automation reliability increased from 70% to 100%.
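As a rough sketch, the changing-reliability design described above can be simulated in a few lines. Note the block structure is an assumption for illustration: the abstract states only that reliability changed across 300 trials, not how the change was stepped.

```python
import random

def make_trial_schedule(start, end, n_trials=300, n_blocks=6):
    """Hypothetical stepped schedule: reliability moves from `start` to
    `end` in equal block-wise steps over `n_trials` trials.
    (The exact step structure is an assumption, not from the paper.)"""
    step = (end - start) / (n_blocks - 1)
    block_len = n_trials // n_blocks
    schedule = []
    for b in range(n_blocks):
        schedule += [start + b * step] * block_len
    return schedule

def automation_correct(reliability, rng=random):
    # On each trial, the automation's recommendation is correct
    # with probability equal to the current reliability level.
    return rng.random() < reliability

# Experiment 1's two groups: 50% -> 100% vs. 100% -> 50%.
increasing = make_trial_schedule(0.50, 1.00)
decreasing = make_trial_schedule(1.00, 0.50)
```

Experiment 2's additional group would correspond to `make_trial_schedule(0.70, 1.00)` under the same assumptions.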

RESULTS

Trust was initially high in the decreasing group and then declined as reliability level decreased; however, trust also declined in the 50% increasing reliability group. Furthermore, when user self-confidence increased, automation reliability had a greater influence on trust. In Experiment 2, the 70% increasing reliability group showed increased trust in the system.

CONCLUSION

Trust does not always track the reliability of automated systems; in particular, it is difficult for trust to recover once the user has interacted with a low-reliability system.

APPLICATIONS

This study provides initial evidence on the dynamics of trust in automation that improves over time, suggesting that users should begin interacting with automation only once it is sufficiently reliable.

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f04/11487872/952dfaecc43e/10.1177_00187208241228636-fig1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f04/11487872/7fa6c08e62eb/10.1177_00187208241228636-fig2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f04/11487872/bb551506b4f5/10.1177_00187208241228636-fig3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f04/11487872/820bff9ad954/10.1177_00187208241228636-fig4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f04/11487872/b0070e7cba63/10.1177_00187208241228636-fig5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f04/11487872/8184aca56c2e/10.1177_00187208241228636-fig6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f04/11487872/755cd0e97f59/10.1177_00187208241228636-fig7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2f04/11487872/2a7979561650/10.1177_00187208241228636-fig8.jpg

Similar Articles

1. Trust with increasing and decreasing reliability.
Hum Factors. 2024 Dec;66(12):2569-2589. doi: 10.1177/00187208241228636. Epub 2024 Mar 6.
2. Effects of information source, pedigree, and reliability on operator interaction with decision support systems.
Hum Factors. 2007 Oct;49(5):773-85. doi: 10.1518/001872007X230154.
3. Providing different levels of accuracy about the reliability of automation to a human operator: impact on human performance.
Ergonomics. 2023 Feb;66(2):217-226. doi: 10.1080/00140139.2022.2069870. Epub 2022 Apr 29.
4. Displaying contextual information reduces the costs of imperfect decision automation in rapid retasking of ISR assets.
Hum Factors. 2014 Sep;56(6):1036-49. doi: 10.1177/0018720813519675.
5. Enhancing component-specific trust with consumer automated systems through humanness design.
Ergonomics. 2023 Feb;66(2):291-302. doi: 10.1080/00140139.2022.2079728. Epub 2022 May 27.
6. The Role of Trust as a Mediator Between System Characteristics and Response Behaviors.
Hum Factors. 2015 Sep;57(6):947-58. doi: 10.1177/0018720815582261. Epub 2015 Apr 27.
7. Trust and the Compliance-Reliance Paradigm: The Effects of Risk, Error Bias, and Reliability on Trust and Dependence.
Hum Factors. 2017 May;59(3):333-345. doi: 10.1177/0018720816682648. Epub 2016 Dec 19.
8. A Little Anthropomorphism Goes a Long Way.
Hum Factors. 2017 Feb;59(1):116-133. doi: 10.1177/0018720816687205.
9. Operator adaptation to changes in system reliability under adaptable automation.
Ergonomics. 2017 Sep;60(9):1261-1272. doi: 10.1080/00140139.2016.1261187. Epub 2016 Nov 25.
10. Not All Information Is Equal: Effects of Disclosing Different Types of Likelihood Information on Trust, Compliance and Reliance, and Task Performance in Human-Automation Teaming.
Hum Factors. 2020 Sep;62(6):987-1001. doi: 10.1177/0018720819862916. Epub 2019 Jul 26.

Cited By

1. Calibrating Trust, Reliance and Dependence in Variable-Reliability Automation.
Proc Hum Factors Ergon Soc Annu Meet. 2024 Sep;68(1):604-610. doi: 10.1177/10711813241277531. Epub 2024 Sep 2.

References

1. Automated decision aids: When are they advisors and when do they take control of human decision making?
J Exp Psychol Appl. 2023 Dec;29(4):849-868. doi: 10.1037/xap0000463. Epub 2023 Mar 6.
2. The Perception of Automation Reliability and Acceptance of Automated Advice.
Hum Factors. 2023 Dec;65(8):1596-1612. doi: 10.1177/00187208211062985. Epub 2022 Jan 3.
3. Toward Quantifying Trust Dynamics: How People Adjust Their Trust After Moment-to-Moment Interaction With Automation.
Hum Factors. 2023 Aug;65(5):862-878. doi: 10.1177/00187208211034716. Epub 2021 Aug 29.
4. lab.js: A free, open, online study builder.
Behav Res Methods. 2022 Apr;54(2):556-573. doi: 10.3758/s13428-019-01283-5.
5. Automation reliability, human-machine system performance, and operator compliance: A study with airport security screeners supported by automated explosives detection systems for cabin baggage screening.
Appl Ergon. 2020 Jul;86:103094. doi: 10.1016/j.apergo.2020.103094. Epub 2020 Apr 10.
6. Trust Mediating Reliability-Reliance Relationship in Supervisory Control of Human-Swarm Interactions.
Hum Factors. 2020 Dec;62(8):1237-1248. doi: 10.1177/0018720819879273. Epub 2019 Oct 7.
7. Not All Information Is Equal: Effects of Disclosing Different Types of Likelihood Information on Trust, Compliance and Reliance, and Task Performance in Human-Automation Teaming.
Hum Factors. 2020 Sep;62(6):987-1001. doi: 10.1177/0018720819862916. Epub 2019 Jul 26.
8. No Effect of Cue Format on Automation Dependence in an Aided Signal Detection Task.
Hum Factors. 2019 Mar;61(2):169-190. doi: 10.1177/0018720818802961. Epub 2018 Oct 18.
9. Understanding and Resolving Failures in Human-Robot Interaction: Literature Review and Model Development.
Front Psychol. 2018 Jun 15;9:861. doi: 10.3389/fpsyg.2018.00861. eCollection 2018.
10. Trust and the Compliance-Reliance Paradigm: The Effects of Risk, Error Bias, and Reliability on Trust and Dependence.
Hum Factors. 2017 May;59(3):333-345. doi: 10.1177/0018720816682648. Epub 2016 Dec 19.