

Heterogeneous human-robot task allocation based on artificial trust.

Affiliations

Robotics Department, University of Michigan, Ann Arbor, MI, USA.

Military Institute of Engineering, Rio de Janeiro, Brazil.

Publication Info

Sci Rep. 2022 Sep 12;12(1):15304. doi: 10.1038/s41598-022-19140-5.

DOI: 10.1038/s41598-022-19140-5
PMID: 36097023
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9468009/
Abstract

Effective human-robot collaboration requires the appropriate allocation of indivisible tasks between humans and robots. A task allocation method that appropriately makes use of the unique capabilities of each agent (either a human or a robot) can improve team performance. This paper presents a novel task allocation method for heterogeneous human-robot teams based on artificial trust from a robot that can learn agent capabilities over time and allocate both existing and novel tasks. Tasks are allocated to the agent that maximizes the expected total reward. The expected total reward incorporates trust in the agent to successfully execute the task as well as the task reward and cost associated with using that agent for that task. Trust in an agent is computed from an artificial trust model, where trust is assessed along a capability dimension by comparing the belief in agent capabilities with the task requirements. An agent's capabilities are represented by a belief distribution and learned using stochastic task outcomes. Our task allocation method was simulated for a human-robot dyad. The team total reward of our artificial trust-based task allocation method outperforms other methods both when the human's capabilities are initially unknown and when the human's capabilities belief distribution has converged to the human's actual capabilities. Our task allocation method enables human-robot teams to maximize their joint performance.
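The allocation rule sketched in the abstract (assign each task to the agent that maximizes expected total reward, where trust in the agent weights the task reward and the cost of using that agent is subtracted) can be illustrated in code. The Python sketch below is a minimal, hypothetical reading of that idea, not the paper's implementation: it assumes a single capability dimension per agent, a Beta-distribution belief over each agent's capability, a deliberately crude outcome-driven belief update, and invented reward/cost numbers.

# Minimal sketch of trust-based task allocation for a human-robot dyad.
# Assumptions (not from the paper): one capability dimension per agent,
# Beta-distributed capability beliefs, and illustrative reward/cost values.
from scipy.stats import beta

class AgentBelief:
    """Belief over an agent's capability, modeled as a Beta distribution on [0, 1]."""
    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b

    def trust(self, requirement):
        # Trust = believed probability that the agent's capability meets or
        # exceeds the task requirement: P(capability >= requirement).
        return 1.0 - beta.cdf(requirement, self.a, self.b)

    def update(self, requirement, success):
        # Learn from a stochastic task outcome: shift the belief upward after
        # a success at this requirement level, downward after a failure
        # (a crude stand-in for the paper's belief update).
        if success:
            self.a += requirement
        else:
            self.b += 1.0 - requirement

def allocate(task, agents):
    # Assign the task to the agent maximizing expected total reward:
    # trust-weighted task reward minus the cost of using that agent.
    def expected_reward(name):
        belief, cost = agents[name]
        return belief.trust(task["requirement"]) * task["reward"] - cost
    return max(agents, key=expected_reward)

# Illustrative dyad; all numbers are made up for the example.
agents = {"human": (AgentBelief(2, 2), 1.0), "robot": (AgentBelief(5, 2), 0.5)}
task = {"requirement": 0.6, "reward": 10.0}
print(allocate(task, agents))  # whichever agent currently maximizes expected total reward

Repeating the allocate-then-update loop over many task outcomes is what would let the capability beliefs converge toward the agents' actual capabilities, which is the regime in which the abstract reports the method outperforming alternatives.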


Figures:
Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/664f/9468009/c077436ac4e9/41598_2022_19140_Fig1_HTML.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/664f/9468009/c9ea7d391121/41598_2022_19140_Fig2_HTML.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/664f/9468009/10ff65d186e1/41598_2022_19140_Fig3_HTML.jpg
Fig. 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/664f/9468009/35ff2dd4f4f2/41598_2022_19140_Fig4_HTML.jpg
Fig. 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/664f/9468009/d79477fa992c/41598_2022_19140_Fig5_HTML.jpg
Fig. 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/664f/9468009/8db1bebaaf8d/41598_2022_19140_Fig6_HTML.jpg

Similar Articles

1. Heterogeneous human-robot task allocation based on artificial trust.
Sci Rep. 2022 Sep 12;12(1):15304. doi: 10.1038/s41598-022-19140-5.
2. The Influence of Robots' Fairness on Humans' Reward-Punishment Behaviors and Trust in Human-Robot Cooperative Teams.
Hum Factors. 2024 Apr;66(4):1103-1117. doi: 10.1177/00187208221133272. Epub 2022 Oct 11.
3. Exploring the effect of automation failure on the human's trustworthiness in human-agent teamwork.
Front Robot AI. 2023 Aug 23;10:1143723. doi: 10.3389/frobt.2023.1143723. eCollection 2023.
4. A Convex Optimization Approach to Multi-Robot Task Allocation and Path Planning.
Sensors (Basel). 2023 May 26;23(11):5103. doi: 10.3390/s23115103.
5. The theory of mind and human-robot trust repair.
Sci Rep. 2023 Jun 19;13(1):9877. doi: 10.1038/s41598-023-37032-0.
6. Human-robot interaction: how worker influence in task allocation improves autonomy.
Ergonomics. 2022 Sep;65(9):1230-1244. doi: 10.1080/00140139.2022.2025912. Epub 2022 Jan 31.
7. Would a robot trust you? Developmental robotics model of trust and theory of mind.
Philos Trans R Soc Lond B Biol Sci. 2019 Apr 29;374(1771):20180032. doi: 10.1098/rstb.2018.0032.
8. Human-Robot Mutual Adaptation in Shared Autonomy.
Proc ACM SIGCHI. 2017 Mar;2017:294-302. doi: 10.1145/2909824.3020252.
9. Moral Decision Making in Human-Agent Teams: Human Control and the Role of Explanations.
Front Robot AI. 2021 May 27;8:640647. doi: 10.3389/frobt.2021.640647. eCollection 2021.
10. Learning Semantics of Gestural Instructions for Human-Robot Collaboration.
Front Neurorobot. 2018 Mar 19;12:7. doi: 10.3389/fnbot.2018.00007. eCollection 2018.

Cited By

1. Active interaction strategy generation for human-robot collaboration based on trust.
Vis Comput Ind Biomed Art. 2025 Jun 23;8(1):16. doi: 10.1186/s42492-025-00198-7.

References

1. Challenging presumed technological superiority when working with (artificial) colleagues.
Sci Rep. 2022 Mar 8;12(1):3768. doi: 10.1038/s41598-022-07808-x.
2. Adaptable (Not Adaptive) Automation: Forefront of Human-Automation Teaming.
Hum Factors. 2022 Mar;64(2):269-277. doi: 10.1177/00187208211037457. Epub 2021 Aug 26.
3. An empirical investigation of trust in AI in a Chinese petrochemical enterprise based on institutional theory.
Sci Rep. 2021 Jun 30;11(1):13564. doi: 10.1038/s41598-021-92904-7.
4. Promises and trust in human-robot interaction.
Sci Rep. 2021 May 6;11(1):9687. doi: 10.1038/s41598-021-88622-9.
5. Evolving Trust in Robots: Specification Through Sequential and Comparative Meta-Analyses.
Hum Factors. 2021 Nov;63(7):1196-1229. doi: 10.1177/0018720820922080. Epub 2020 Jun 10.
6. Long-Term Evaluation of Drivers' Behavioral Adaptation to an Adaptive Collision Avoidance System.
Hum Factors. 2021 Nov;63(7):1295-1315. doi: 10.1177/0018720820926092. Epub 2020 Jun 2.
7. Adaptive Automation Triggered by EEG-Based Mental Workload Index: A Passive Brain-Computer Interface Application in Realistic Air Traffic Control Environment.
Front Hum Neurosci. 2016 Oct 26;10:539. doi: 10.3389/fnhum.2016.00539. eCollection 2016.
8. Human-Robot Interaction: Status and Challenges.
Hum Factors. 2016 Jun;58(4):525-32. doi: 10.1177/0018720816644364. Epub 2016 Apr 20.
9. A meta-analysis of factors affecting trust in human-robot interaction.
Hum Factors. 2011 Oct;53(5):517-27. doi: 10.1177/0018720811417254.
10. Trust in automation: designing for appropriate reliance.
Hum Factors. 2004 Spring;46(1):50-80. doi: 10.1518/hfes.46.1.50_30392.