Department of Computer Science, Università di Pisa, 56124 Pisa, Italy.
Institute for Informatics and Telematics (IIT), National Research Council (CNR), 56124 Pisa, Italy.
Sensors (Basel). 2023 Mar 24;23(7):3409. doi: 10.3390/s23073409.
Given the increasing prevalence of intelligent systems capable of acting autonomously or augmenting human activities, it is important to consider scenarios in which the human, the autonomous system, or both can fail as a result of one of several contributing factors (e.g., perception). A failure by either the human or the autonomous agent can lead to outcomes ranging from merely reduced performance to something as severe as injury or death. We consider the hybrid human-AI teaming case in which a managing agent is tasked with deciding when to delegate an assignment and whether the human or the autonomous system should gain control. In this setting, the manager estimates its best action based on the likelihood that either agent (human or autonomous) will fail as a result of its sensing capabilities and possible deficiencies. We model how the environmental context can contribute to, or exacerbate, these sensing deficiencies. These contexts provide cases in which the manager must learn to identify the agent whose capabilities are suitable for decision-making. We demonstrate that a reinforcement learning manager can learn the correct context-delegation association and help the hybrid team of agents outperform any agent working in isolation.
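The delegation idea described in the abstract can be sketched as a small contextual Q-learning loop: a manager observes the environmental context, delegates control to either the human or the autonomous agent, and updates its estimates from the observed outcome. This is a minimal illustrative sketch, not the paper's implementation; the contexts, agents, and per-context success probabilities below are invented for demonstration.

```python
import random

# Hypothetical (context, agent) success rates, chosen so that each
# context favors a different agent -- mirroring the paper's premise that
# environmental context can exacerbate an agent's sensing deficiencies.
SUCCESS_PROB = {
    # context: (human success rate, autonomous success rate)
    "clear": (0.70, 0.95),  # good sensing conditions favor the autonomous agent
    "foggy": (0.85, 0.40),  # degraded sensing conditions favor the human
}
AGENTS = ("human", "autonomous")


def train_manager(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    """Learn Q-values for delegating control in each context."""
    rng = random.Random(seed)
    q = {(c, a): 0.0 for c in SUCCESS_PROB for a in AGENTS}
    for _ in range(episodes):
        context = rng.choice(list(SUCCESS_PROB))
        # Epsilon-greedy choice of which agent gains control.
        if rng.random() < epsilon:
            agent = rng.choice(AGENTS)
        else:
            agent = max(AGENTS, key=lambda a: q[(context, a)])
        # Outcome: +1 on success, -1 on failure, drawn from the assumed rates.
        success = rng.random() < SUCCESS_PROB[context][AGENTS.index(agent)]
        reward = 1.0 if success else -1.0
        # Stateless (bandit-style) Q-update toward the observed reward.
        q[(context, agent)] += alpha * (reward - q[(context, agent)])
    return q


q = train_manager()
# Greedy policy: the learned context-delegation association.
policy = {c: max(AGENTS, key=lambda a: q[(c, a)]) for c in SUCCESS_PROB}
```

Under these assumed rates, the greedy policy delegates each context to the better-suited agent, so the managed team's expected success exceeds that of either agent acting alone in all contexts.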