Adaptive trust calibration for human-AI collaboration.

Affiliations

Department of Informatics, School of Multidisciplinary Sciences, The Graduate University for Advanced Studies (SOKENDAI), Tokyo, Japan.

Digital Content and Media Sciences Research Division, National Institute of Informatics, Tokyo, Japan.

Publication

PLoS One. 2020 Feb 21;15(2):e0229132. doi: 10.1371/journal.pone.0229132. eCollection 2020.

DOI: 10.1371/journal.pone.0229132
PMID: 32084201
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7034851/
Abstract

The safety and efficiency of human-AI collaboration often depend on how appropriately humans calibrate their trust in AI agents. Over-trusting an autonomous system can cause serious safety issues. Although many studies have focused on the importance of system transparency for maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains very limited. To fill these gaps, we propose a method of adaptive trust calibration that consists of a framework for detecting an inappropriate calibration status by monitoring the user's reliance behavior, and cognitive cues, called "trust calibration cues," that prompt the user to reinitiate trust calibration. We evaluated our framework and four types of trust calibration cues in an online experiment using a drone simulator. A total of 116 participants performed pothole inspection tasks using the drone's automatic inspection, whose reliability fluctuated with the weather conditions. Participants had to decide whether to rely on automatic inspection or to inspect manually. The results showed that adaptively presenting simple cues significantly promoted trust calibration during over-trust.
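The detection loop described in the abstract (monitor the user's reliance behavior, flag an improper calibration status, and present a trust calibration cue) can be sketched in Python. The window size, margin threshold, and cue text below are illustrative assumptions, not the paper's actual detection criteria.

```python
# Minimal sketch of an adaptive trust calibration loop: monitor the user's
# reliance behavior, detect over-trust, and emit a "trust calibration cue".
# The windowing and threshold rules here are illustrative assumptions.
from collections import deque
from typing import Optional

class TrustCalibrationMonitor:
    def __init__(self, window: int = 5, margin: float = 0.2):
        self.window = window                     # recent decisions to consider
        self.margin = margin                     # tolerated reliance/reliability gap
        self.relied = deque(maxlen=window)       # 1 if the user relied on automation
        self.reliability = deque(maxlen=window)  # automation reliability per trial

    def record(self, user_relied: bool, system_reliability: float) -> Optional[str]:
        """Log one decision; return a cue when over-trust is detected."""
        self.relied.append(1 if user_relied else 0)
        self.reliability.append(system_reliability)
        if len(self.relied) < self.window:
            return None  # not enough evidence yet
        reliance_rate = sum(self.relied) / self.window
        mean_reliability = sum(self.reliability) / self.window
        if reliance_rate - mean_reliability > self.margin:
            return "cue: reliability has dropped; consider inspecting manually"
        return None

monitor = TrustCalibrationMonitor()
# The user keeps relying while bad weather degrades the drone's reliability.
cue = None
for reliability in [0.9, 0.8, 0.5, 0.4, 0.3]:
    cue = monitor.record(user_relied=True, system_reliability=reliability)
print(cue)  # over-trust detected: reliance 1.0 vs. mean reliability 0.58
```

A symmetric rule (reliance rate far below reliability) would flag under-trust; in practice both thresholds would need tuning against observed user behavior.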


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb6c/7034851/aba05b1a9cc4/pone.0229132.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb6c/7034851/1adb20437fdc/pone.0229132.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb6c/7034851/61db5d96b82f/pone.0229132.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb6c/7034851/3a47829241f8/pone.0229132.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb6c/7034851/3c5ea682ee8b/pone.0229132.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb6c/7034851/1b5a9d4c9eb9/pone.0229132.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb6c/7034851/42fd60fe8648/pone.0229132.g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cb6c/7034851/2fec46c7d485/pone.0229132.g008.jpg

Similar articles

1
Adaptive trust calibration for human-AI collaboration.
PLoS One. 2020 Feb 21;15(2):e0229132. doi: 10.1371/journal.pone.0229132. eCollection 2020.
2
The reliability and transparency bases of trust in human-swarm interaction: principles and implications.
Ergonomics. 2020 Sep;63(9):1116-1132. doi: 10.1080/00140139.2020.1764112. Epub 2020 May 13.
3
In human-machine trust, humans rely on a simple averaging strategy.
Cogn Res Princ Implic. 2024 Sep 2;9(1):58. doi: 10.1186/s41235-024-00583-5.
4
Inferring Trust From Users' Behaviours; Agents' Predictability Positively Affects Trust, Task Performance and Cognitive Load in Human-Agent Real-Time Collaboration.
Front Robot AI. 2021 Jul 8;8:642201. doi: 10.3389/frobt.2021.642201. eCollection 2021.
5
Trust in smart systems: sharing driving goals and giving information to increase trustworthiness and acceptability of smart systems in cars.
Hum Factors. 2012 Oct;54(5):799-810. doi: 10.1177/0018720812443825.
6
Cognitive and behavioral markers for human detection error in AI-assisted bridge inspection.
Appl Ergon. 2024 Nov;121:104346. doi: 10.1016/j.apergo.2024.104346. Epub 2024 Jul 16.
7
More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction.
Hum Factors. 2024 Dec;66(12):2606-2620. doi: 10.1177/00187208241234810. Epub 2024 Mar 4.
8
The More You Know: Trust Dynamics and Calibration in Highly Automated Driving and the Effects of Take-Overs, System Malfunction, and System Transparency.
Hum Factors. 2020 Aug;62(5):718-736. doi: 10.1177/0018720819853686. Epub 2019 Jun 24.
9
Before and beyond trust: reliance in medical AI.
J Med Ethics. 2022 Nov;48(11):852-856. doi: 10.1136/medethics-2020-107095. Epub 2021 Aug 23.
10
Trust in artificial intelligence for medical diagnoses.
Prog Brain Res. 2020;253:263-282. doi: 10.1016/bs.pbr.2020.06.006. Epub 2020 Jul 2.

Cited by

1
Reframing individual roles in collaboration: digital identity construction and adaptive mechanisms for resistance-based professional skills in AI-human intelligence symbiosis.
Front Psychol. 2025 Aug 8;16:1652130. doi: 10.3389/fpsyg.2025.1652130. eCollection 2025.
2
Trust in Medical AI: The Case of mHealth Diabetes Apps.
J Eval Clin Pract. 2025 Aug;31(5):e70216. doi: 10.1111/jep.70216.
3
Understanding dimensions of trust in AI through quantitative cognition: Implications for human-AI collaboration.
PLoS One. 2025 Jul 2;20(7):e0326558. doi: 10.1371/journal.pone.0326558. eCollection 2025.
4
Self-assessment in machines boosts human trust.
Front Robot AI. 2025 May 26;12:1557075. doi: 10.3389/frobt.2025.1557075. eCollection 2025.
5
Psychological Factors Influencing Appropriate Reliance on AI-enabled Clinical Decision Support Systems: Experimental Web-Based Study Among Dermatologists.
J Med Internet Res. 2025 Apr 4;27:e58660. doi: 10.2196/58660.
6
Facilitating Trust Calibration in Artificial Intelligence-Driven Diagnostic Decision Support Systems for Determining Physicians' Diagnostic Accuracy: Quasi-Experimental Study.
JMIR Form Res. 2024 Nov 27;8:e58666. doi: 10.2196/58666.
7
Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM).
Hum Factors. 2024 Nov;66(11):2485-2502. doi: 10.1177/00187208231218156. Epub 2023 Dec 2.
8
Fostering Collective Intelligence in Human-AI Collaboration: Laying the Groundwork for COHUMAIN.
Top Cogn Sci. 2025 Apr;17(2):189-216. doi: 10.1111/tops.12679. Epub 2023 Jun 29.
9
Influence of agent's self-disclosure on human empathy.
PLoS One. 2023 May 10;18(5):e0283955. doi: 10.1371/journal.pone.0283955. eCollection 2023.
10
Acceptance, initial trust formation, and human biases in artificial intelligence: Focus on clinicians.
Front Digit Health. 2022 Aug 23;4:966174. doi: 10.3389/fdgth.2022.966174. eCollection 2022.

References

1
From Trust in Automation to Decision Neuroscience: Applying Cognitive Neuroscience Methods to Understand and Improve Interaction Decisions Involved in Human Automation Interaction.
Front Hum Neurosci. 2016 Jun 30;10:290. doi: 10.3389/fnhum.2016.00290. eCollection 2016.
2
Intelligent Agent Transparency in Human-Agent Teaming for Multi-UxV Management.
Hum Factors. 2016 May;58(3):401-15. doi: 10.1177/0018720815621206. Epub 2016 Feb 11.
3
Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving.
Hum Factors. 2016 May;58(3):509-19. doi: 10.1177/0018720815625744. Epub 2016 Feb 3.
4
Trust in automation: integrating empirical evidence on factors that influence trust.
Hum Factors. 2015 May;57(3):407-34. doi: 10.1177/0018720814547570. Epub 2014 Sep 2.
5
Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research.
PLoS One. 2013;8(3):e57410. doi: 10.1371/journal.pone.0057410. Epub 2013 Mar 13.
6
A meta-analysis of factors affecting trust in human-robot interaction.
Hum Factors. 2011 Oct;53(5):517-27. doi: 10.1177/0018720811417254.
7
Supporting trust calibration and the effective use of decision aids by presenting dynamic system confidence information.
Hum Factors. 2006 Winter;48(4):656-65. doi: 10.1518/001872006779166334.
8
Trust in automation: designing for appropriate reliance.
Hum Factors. 2004 Spring;46(1):50-80. doi: 10.1518/hfes.46.1.50_30392.
9
A model for types and levels of human interaction with automation.
IEEE Trans Syst Man Cybern A Syst Hum. 2000 May;30(3):286-97. doi: 10.1109/3468.844354.
10
Driven to distraction: dual-task studies of simulated driving and conversing on a cellular telephone.
Psychol Sci. 2001 Nov;12(6):462-6. doi: 10.1111/1467-9280.00386.