
Compensating for Sensing Failures via Delegation in Human-AI Hybrid Systems

Affiliations

Department of Computer Science, Università di Pisa, 56124 Pisa, Italy.

Institute for Informatics and Telematics (IIT), National Research Council (CNR), 56124 Pisa, Italy.

Publication Information

Sensors (Basel). 2023 Mar 24;23(7):3409. doi: 10.3390/s23073409.

DOI: 10.3390/s23073409
PMID: 37050469
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10098943/
Abstract

Given the increasing prevalence of intelligent systems capable of autonomous actions or augmenting human activities, it is important to consider scenarios in which the human, autonomous system, or both can exhibit failures as a result of one of several contributing factors (e.g., perception). Failures for either humans or autonomous agents can lead to simply a reduced performance level, or a failure can lead to something as severe as injury or death. For our topic, we consider the hybrid human-AI teaming case where a managing agent is tasked with identifying when to perform a delegated assignment and whether the human or autonomous system should gain control. In this context, the manager will estimate its best action based on the likelihood of either (human, autonomous) agent's failure as a result of their sensing capabilities and possible deficiencies. We model how the environmental context can contribute to, or exacerbate, these sensing deficiencies. These contexts provide cases where the manager must learn to identify agents with capabilities that are suitable for decision-making. As such, we demonstrate how a reinforcement learning manager can correct the context-delegation association and assist the hybrid team of agents in outperforming the behavior of any agent working in isolation.
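The delegation mechanism the abstract describes — a manager that learns which agent's sensing is more reliable in each environmental context — can be sketched as a small tabular reinforcement-learning loop. This is a minimal illustration, not the paper's implementation; the contexts, agents, and success probabilities below are invented for the example.

```python
import random

random.seed(0)

# Hypothetical sensing reliabilities: each agent's chance of avoiding a
# sensing failure depends on the environmental context (invented numbers).
SUCCESS_PROB = {
    ("clear", "human"): 0.9,
    ("clear", "auto"): 0.7,
    ("fog", "human"): 0.3,
    ("fog", "auto"): 0.8,
}
CONTEXTS = ["clear", "fog"]
AGENTS = ["human", "auto"]

# Q[(context, agent)]: the manager's running estimate of the probability
# that delegating to `agent` in `context` succeeds.
Q = {(c, a): 0.0 for c in CONTEXTS for a in AGENTS}

def run_task(context, agent):
    """Simulate one delegated task: reward 1 on success, 0 on failure."""
    return 1.0 if random.random() < SUCCESS_PROB[(context, agent)] else 0.0

def train(episodes=5000, alpha=0.1, epsilon=0.1):
    for _ in range(episodes):
        context = random.choice(CONTEXTS)
        # Epsilon-greedy: mostly exploit the current best delegation,
        # occasionally explore the other agent.
        if random.random() < epsilon:
            agent = random.choice(AGENTS)
        else:
            agent = max(AGENTS, key=lambda a: Q[(context, a)])
        reward = run_task(context, agent)
        # Incremental update toward the observed outcome.
        Q[(context, agent)] += alpha * (reward - Q[(context, agent)])

train()
policy = {c: max(AGENTS, key=lambda a: Q[(c, a)]) for c in CONTEXTS}
print(policy)
```

Under these invented probabilities the learned policy delegates to the human in clear conditions and to the autonomous system in fog, illustrating the abstract's claim that a learning manager can associate contexts with the more capable agent and outperform either agent acting alone.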


Similar Articles

1. Compensating for Sensing Failures via Delegation in Human-AI Hybrid Systems.
   Sensors (Basel). 2023 Mar 24;23(7):3409. doi: 10.3390/s23073409.
2. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.
   Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.
3. Towards reconciling usability and usefulness of policy explanations for sequential decision-making systems.
   Front Robot AI. 2024 Jul 22;11:1375490. doi: 10.3389/frobt.2024.1375490. eCollection 2024.
4. Convergence of Artificial Intelligence and Neuroscience towards the Diagnosis of Neurological Disorders-A Scoping Review.
   Sensors (Basel). 2023 Mar 13;23(6):3062. doi: 10.3390/s23063062.
5. Moral Decision Making in Human-Agent Teams: Human Control and the Role of Explanations.
   Front Robot AI. 2021 May 27;8:640647. doi: 10.3389/frobt.2021.640647. eCollection 2021.
6. Delegation: developing the habit.
   Radiol Manage. 2001 Jul-Aug;23(4):16-20, 22, 24.
7. Human factors considerations for the context-aware design of adaptive autonomous teammates.
   Ergonomics. 2025 Apr;68(4):571-587. doi: 10.1080/00140139.2024.2380341. Epub 2024 Jul 26.
8. Fear-Neuro-Inspired Reinforcement Learning for Safe Autonomous Driving.
   IEEE Trans Pattern Anal Mach Intell. 2024 Jan;46(1):267-279. doi: 10.1109/TPAMI.2023.3322426. Epub 2023 Dec 5.
9. Human-AI teams-Challenges for a team-centered AI at work.
   Front Artif Intell. 2023 Sep 27;6:1252897. doi: 10.3389/frai.2023.1252897. eCollection 2023.
10. A conceptual and computational model of moral decision making in human and artificial agents.
   Top Cogn Sci. 2010 Jul;2(3):454-85. doi: 10.1111/j.1756-8765.2010.01095.x. Epub 2010 May 13.

References Cited in This Article

1. Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review.
   Sensors (Basel). 2021 Mar 18;21(6):2140. doi: 10.3390/s21062140.
2. A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research.
   Sensors (Basel). 2019 Feb 5;19(3):648. doi: 10.3390/s19030648.
3. To delegate or not to delegate: A review of control frameworks for autonomous cars.
   Appl Ergon. 2016 Mar;53 Pt B:383-8. doi: 10.1016/j.apergo.2015.10.011. Epub 2015 Oct 29.