Challenging presumed technological superiority when working with (artificial) colleagues.

Affiliation

Department of Psychology and Ergonomics, Technische Universität Berlin, Marchstr. 12, F7, 10587, Berlin, Germany.

Publication

Sci Rep. 2022 Mar 8;12(1):3768. doi: 10.1038/s41598-022-07808-x.

DOI: 10.1038/s41598-022-07808-x
PMID: 35260683
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8904495/
Abstract

Technological advancements are ubiquitously supporting or even replacing humans in all areas of life, bringing the potential for human-technology symbiosis but also novel challenges. To address these challenges, we conducted three experiments in different task contexts ranging from loan assignment over X-Ray evaluation to process industry. Specifically, we investigated the impact of support agent (artificial intelligence, decision support system, or human) and failure experience (one vs. none) on trust-related aspects of human-agent interaction. This included not only the subjective evaluation of the respective agent in terms of trust, reliability, and responsibility, when working together, but also a change in perspective to the willingness to be assessed oneself by the agent. In contrast to a presumed technological superiority, we show a general advantage with regard to trust and responsibility of human support over both technical support systems (i.e., artificial intelligence and decision support system), regardless of task context from the collaborative perspective. This effect reversed to a preference for technical systems when switching the perspective to being assessed. These findings illustrate an imperfect automation schema from the perspective of the advice-taker and demonstrate the importance of perspective when working with or being assessed by machine intelligence.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/828d/8904495/e844c4851e47/41598_2022_7808_Fig1_HTML.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/828d/8904495/5ac88f9ef4b4/41598_2022_7808_Fig2_HTML.jpg

Similar Articles

1. Challenging presumed technological superiority when working with (artificial) colleagues. Sci Rep. 2022 Mar 8;12(1):3768. doi: 10.1038/s41598-022-07808-x.
2. The (Im)perfect Automation Schema: Who Is Trusted More, Automated or Human Decision Support? Hum Factors. 2024 Aug;66(8):1995-2007. doi: 10.1177/00187208231197347. Epub 2023 Aug 26.
3. Trust in AI: why we should be designing for APPROPRIATE reliance. J Am Med Inform Assoc. 2021 Dec 28;29(1):207-212. doi: 10.1093/jamia/ocab238.
4. More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction. Hum Factors. 2024 Dec;66(12):2606-2620. doi: 10.1177/00187208241234810. Epub 2024 Mar 4.
5. Trust in Artificial Intelligence: Meta-Analytic Findings. Hum Factors. 2023 Mar;65(2):337-359. doi: 10.1177/00187208211013988. Epub 2021 May 28.
6. Effects of information source, pedigree, and reliability on operator interaction with decision support systems. Hum Factors. 2007 Oct;49(5):773-85. doi: 10.1518/001872007X230154.
7. Automation bias: empirical results assessing influencing factors. Int J Med Inform. 2014 May;83(5):368-75. doi: 10.1016/j.ijmedinf.2014.01.001. Epub 2014 Jan 17.
8. Industry Perspective on Artificial Intelligence/Machine Learning in Pharmacovigilance. Drug Saf. 2022 May;45(5):439-448. doi: 10.1007/s40264-022-01164-5. Epub 2022 May 17.
9. Patients' and Clinicians' Perceived Trust in Internet-of-Things Systems to Support Asthma Self-management: Qualitative Interview Study. JMIR Mhealth Uhealth. 2021 Jul 16;9(7):e24127. doi: 10.2196/24127.
10. In human-machine trust, humans rely on a simple averaging strategy. Cogn Res Princ Implic. 2024 Sep 2;9(1):58. doi: 10.1186/s41235-024-00583-5.

Cited By

1. Judgments of Difficulty (JODs) While Observing an Automated System Support the Media Equation and Unique Agent Hypotheses. Hum Factors. 2025 Apr;67(4):347-366. doi: 10.1177/00187208241273379. Epub 2024 Aug 18.
2. Anthropomorphic framing and failure comprehensibility influence different facets of trust towards industrial robots. Front Robot AI. 2023 Sep 7;10:1235017. doi: 10.3389/frobt.2023.1235017. eCollection 2023.
3. Humans versus machines: Who is perceived to decide fairer? Experimental evidence on attitudes toward automated decision-making. Patterns (N Y). 2022 Sep 29;3(10):100591. doi: 10.1016/j.patter.2022.100591. eCollection 2022 Oct 14.
4. Heterogeneous human-robot task allocation based on artificial trust. Sci Rep. 2022 Sep 12;12(1):15304. doi: 10.1038/s41598-022-19140-5.

References

1. Trust in Artificial Intelligence: Meta-Analytic Findings. Hum Factors. 2023 Mar;65(2):337-359. doi: 10.1177/00187208211013988. Epub 2021 May 28.
2. Measuring the Efficiency of Automation-Aided Performance in a Simulated Baggage Screening Task. Hum Factors. 2022 Sep;64(6):945-961. doi: 10.1177/0018720820983632. Epub 2021 Jan 28.
3. Human Performance Consequences of Automated Decision Aids: The Impact of Time Pressure. Hum Factors. 2022 Jun;64(4):617-634. doi: 10.1177/0018720820965019. Epub 2020 Oct 28.
4. Transparency and reproducibility in artificial intelligence. Nature. 2020 Oct;586(7829):E14-E16. doi: 10.1038/s41586-020-2766-y. Epub 2020 Oct 14.
5. People Reject Algorithms in Uncertain Decision Domains Because They Have Diminishing Sensitivity to Forecasting Error. Psychol Sci. 2020 Oct;31(10):1302-1314. doi: 10.1177/0956797620948841. Epub 2020 Sep 11.
6. Artificial Intelligence in Skin Cancer Diagnostics: The Patients' Perspective. Front Med (Lausanne). 2020 Jun 2;7:233. doi: 10.3389/fmed.2020.00233. eCollection 2020.
7. International evaluation of an AI system for breast cancer screening. Nature. 2020 Jan;577(7788):89-94. doi: 10.1038/s41586-019-1799-6. Epub 2020 Jan 1.
8. Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Sci Eng Ethics. 2020 Aug;26(4):2051-2068. doi: 10.1007/s11948-019-00146-8. Epub 2019 Oct 24.
9. Agency plus automation: Designing artificial intelligence into interactive systems. Proc Natl Acad Sci U S A. 2019 Feb 5;116(6):1844-1850. doi: 10.1073/pnas.1807184115.
10. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer. JAMA. 2017 Dec 12;318(22):2199-2210. doi: 10.1001/jama.2017.14585.