

Initial validation of the trust of automated systems test (TOAST).

Affiliation

Institute for Defense Analyses.

Publication Info

J Soc Psychol. 2020 Nov 1;160(6):735-750. doi: 10.1080/00224545.2020.1749020. Epub 2020 Apr 16.

DOI: 10.1080/00224545.2020.1749020
PMID: 32297844
Abstract

Trust is a key determinant of whether people rely on automated systems in the military and the public. However, there is currently no standard for measuring trust in automated systems. In the present studies, we propose a scale to measure trust in automated systems that is grounded in current research and theory on trust formation, which we refer to as the Trust in Automated Systems Test (TOAST). We evaluated both the reliability of the scale structure and criterion validity using independent, military-affiliated and civilian samples. In both studies we found that the TOAST exhibited a two-factor structure, measuring system understanding and performance (respectively), and that factor scores significantly predicted scores on theoretically related constructs demonstrating clear criterion validity. We discuss the implications of our findings for advancing the empirical literature and in improving interface design.


Similar Articles

1. Initial validation of the trust of automated systems test (TOAST).
   J Soc Psychol. 2020 Nov 1;160(6):735-750. doi: 10.1080/00224545.2020.1749020. Epub 2020 Apr 16.
2. Trust in automation: integrating empirical evidence on factors that influence trust.
   Hum Factors. 2015 May;57(3):407-34. doi: 10.1177/0018720814547570. Epub 2014 Sep 2.
3. Measuring Individual Differences in the Perfect Automation Schema.
   Hum Factors. 2015 Aug;57(5):740-53. doi: 10.1177/0018720815581247. Epub 2015 Apr 16.
4. Effects of information source, pedigree, and reliability on operator interaction with decision support systems.
   Hum Factors. 2007 Oct;49(5):773-85. doi: 10.1518/001872007X230154.
5. Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving.
   Hum Factors. 2016 May;58(3):509-19. doi: 10.1177/0018720815625744. Epub 2016 Feb 3.
6. Introduction matters: Manipulating trust in automation and reliance in automated driving.
   Appl Ergon. 2018 Jan;66:18-31. doi: 10.1016/j.apergo.2017.07.006. Epub 2017 Aug 12.
7. Human-human reliance in the context of automation.
   Hum Factors. 2012 Feb;54(1):112-21. doi: 10.1177/0018720811427034.
8. Influencing Trust for Human-Automation Collaborative Scheduling of Multiple Unmanned Vehicles.
   Hum Factors. 2015 Nov;57(7):1208-18. doi: 10.1177/0018720815587803. Epub 2015 Jun 9.
9. Why Do I Have to Drive Now? Post Hoc Explanations of Takeover Requests.
   Hum Factors. 2018 May;60(3):305-323. doi: 10.1177/0018720817747730. Epub 2017 Dec 28.
10. Trust and Distrust of Automated Parking in a Tesla Model X.
    Hum Factors. 2020 Mar;62(2):194-210. doi: 10.1177/0018720819865412. Epub 2019 Aug 16.

Cited By

1. Patients', clinicians' and developers' perspectives and experiences of artificial intelligence in cardiac healthcare: A qualitative study.
   Digit Health. 2025 Jun 16;11:20552076251328578. doi: 10.1177/20552076251328578. eCollection 2025 Jan-Dec.
2. Intermediate Judgments and Trust in Artificial Intelligence-Supported Decision-Making.
   Entropy (Basel). 2024 Jun 8;26(6):500. doi: 10.3390/e26060500.
3. Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM).
   Hum Factors. 2024 Nov;66(11):2485-2502. doi: 10.1177/00187208231218156. Epub 2023 Dec 2.
4. Measurement of Trust in Automation: A Narrative Review and Reference Guide.
   Front Psychol. 2021 Oct 19;12:604977. doi: 10.3389/fpsyg.2021.604977. eCollection 2021.