

Effects of information source, pedigree, and reliability on operator interaction with decision support systems.

Authors

Poornima Madhavan, Douglas A. Wiegmann

Affiliation

University of Illinois at Urbana-Champaign, Champaign, Illinois, USA.

Publication

Hum Factors. 2007 Oct;49(5):773-85. doi: 10.1518/001872007X230154.

DOI: 10.1518/001872007X230154
PMID: 17915596
Abstract

OBJECTIVE

Two experiments are described that examined operators' perceptions of decision aids.

BACKGROUND

Research has suggested certain biases against automation that influence human interaction with automation. We differentiated preconceived biases from post hoc biases and examined their effects on advice acceptance.

METHOD

In Study 1 we examined operators' trust in and perceived reliability of humans versus automation of varying pedigree (expert vs. novice), based on written descriptions of these advisers prior to operators' interacting with these advisers. In Study 2 we examined participants' post hoc trust in, perceived reliability of, and dependence on these advisers after their objective experience of advisers' reliability (90% vs. 70%) in a luggage-screening task.

RESULTS

In Study 1 measures of perceived reliability indicated that automation was perceived as more reliable than humans across pedigrees. Measures of trust indicated that automated "novices" were trusted more than human "novices"; human "experts" were trusted more than automated "experts." In Study 2, perceived reliability varied as a function of pedigree, whereas subjective trust was always higher for automation than for humans. Advice acceptance from novice automation was always higher than from novice humans. However, when advisers were 70% reliable, errors generated by expert automation led to a drop in compliance/reliance on expert automation relative to expert humans.

CONCLUSION

Preconceived expectations of automation influence the use of these aids in actual tasks.

APPLICATION

The results provide a reference point for deriving indices of "optimal" user interaction with decision aids and for developing frameworks of trust in decision support systems.


Similar Articles

1. Effects of information source, pedigree, and reliability on operator interaction with decision support systems.
Hum Factors. 2007 Oct;49(5):773-85. doi: 10.1518/001872007X230154.

2. Who's the real expert here? Pedigree's unique bias on trust between human and automated advisers.
Appl Ergon. 2019 Nov;81:102907. doi: 10.1016/j.apergo.2019.102907. Epub 2019 Jul 26.

3. Relationship between automation trust and operator performance for the novice and expert in spacecraft rendezvous and docking (RVD).
Appl Ergon. 2018 Sep;71:1-8. doi: 10.1016/j.apergo.2018.03.014. Epub 2018 Apr 1.

4. Trust in automation. Part II. Experimental studies of trust and human intervention in a process control simulation.
Ergonomics. 1996 Mar;39(3):429-60. doi: 10.1080/00140139608964474.

5. Not All Information Is Equal: Effects of Disclosing Different Types of Likelihood Information on Trust, Compliance and Reliance, and Task Performance in Human-Automation Teaming.
Hum Factors. 2020 Sep;62(6):987-1001. doi: 10.1177/0018720819862916. Epub 2019 Jul 26.

6. Supporting trust calibration and the effective use of decision aids by presenting dynamic system confidence information.
Hum Factors. 2006 Winter;48(4):656-65. doi: 10.1518/001872006779166334.

7. Team performance in networked supervisory control of unmanned air vehicles: effects of automation, working memory, and communication content.
Hum Factors. 2014 May;56(3):463-75. doi: 10.1177/0018720813496269.

8. Individual differences in the calibration of trust in automation.
Hum Factors. 2015 Jun;57(4):545-56. doi: 10.1177/0018720814564422. Epub 2014 Dec 29.

9. Complacency and bias in human use of automation: an attentional integration.
Hum Factors. 2010 Jun;52(3):381-410. doi: 10.1177/0018720810376055.

10. Trust and the Compliance-Reliance Paradigm: The Effects of Risk, Error Bias, and Reliability on Trust and Dependence.
Hum Factors. 2017 May;59(3):333-345. doi: 10.1177/0018720816682648. Epub 2016 Dec 19.

Cited By

1. Interacting with fallible AI: is distrust helpful when receiving AI misclassifications?
Front Psychol. 2025 May 27;16:1574809. doi: 10.3389/fpsyg.2025.1574809. eCollection 2025.

2. Political ideology shapes support for the use of AI in policy-making.
Front Artif Intell. 2024 Oct 30;7:1447171. doi: 10.3389/frai.2024.1447171. eCollection 2024.

3. How do humans learn about the reliability of automation?
Cogn Res Princ Implic. 2024 Feb 16;9(1):8. doi: 10.1186/s41235-024-00533-1.

4. Transparent Automated Advice to Mitigate the Impact of Variation in Automation Reliability.
Hum Factors. 2024 Aug;66(8):2008-2024. doi: 10.1177/00187208231196738. Epub 2023 Aug 27.

5. Challenging presumed technological superiority when working with (artificial) colleagues.
Sci Rep. 2022 Mar 8;12(1):3768. doi: 10.1038/s41598-022-07808-x.

6. Determinants of acceptance of patients with heart failure and their informal caregivers regarding an interactive decision-making system: a qualitative study.
BMJ Open. 2021 Jun 16;11(6):e046160. doi: 10.1136/bmjopen-2020-046160.

7. The Next Generation of Medical Decision Support: A Roadmap Toward Transparent Expert Companions.
Front Artif Intell. 2020 Sep 24;3:507973. doi: 10.3389/frai.2020.507973. eCollection 2020.

8. Automated Systems and Trust: Mineworkers' Trust in Proximity Detection Systems for Mobile Machines.
Saf Health Work. 2019 Dec;10(4):461-469. doi: 10.1016/j.shaw.2019.09.003. Epub 2019 Sep 25.

9. Feedback and Direction Sources Influence Navigation Decision Making on Experienced Routes.
Front Psychol. 2019 Sep 13;10:2104. doi: 10.3389/fpsyg.2019.02104. eCollection 2019.

10. Effects of Trust, Self-Confidence, and Feedback on the Use of Decision Automation.
Front Psychol. 2019 Mar 12;10:519. doi: 10.3389/fpsyg.2019.00519. eCollection 2019.