van der Waa Jasper, Verdult Sabine, van den Bosch Karel, van Diggelen Jurriaan, Haije Tjalling, van der Stigchel Birgit, Cocu Ioana
Perceptual and Cognitive Systems, TNO, Soesterberg, Netherlands.
Interactive Intelligence, Delft University of Technology, Delft, Netherlands.
Front Robot AI. 2021 May 27;8:640647. doi: 10.3389/frobt.2021.640647. eCollection 2021.
With the progress of Artificial Intelligence, intelligent agents are increasingly being deployed in tasks for which ethical guidelines and moral values apply. As artificial agents do not have a legal position, humans should be held accountable if actions do not comply, implying that humans need to exercise control. This is often labeled as meaningful human control (MHC). In this paper, achieving MHC is addressed as a design problem, defining the collaboration between humans and agents. We propose three possible team designs (Team Design Patterns), varying in the level of autonomy on the agent's part. The team designs include explanations given by the agent to clarify its reasoning and decision-making. The designs were implemented in a simulation of a medical triage task, to be executed by a domain expert and an artificial agent. The triage task simulates making decisions under time pressure, with too few resources available to comply with all medical guidelines all the time, hence involving moral choices. Domain experts (i.e., health care professionals) participated in the present study. The first goal was to assess the ecological relevance of the simulation; the second, to explore the control that the human has over the agent to warrant morally compliant behavior in each proposed team design; and the third, to evaluate the role of agent explanations in the human's understanding of the agent's reasoning. Results showed that the experts overall found the task a believable simulation of what might occur in reality. Domain experts experienced control over the team's moral compliance when consequences were quickly noticeable. When consequences instead emerged much later, the experts experienced less control and felt less responsible. Possibly due to the time pressure implemented in the task, or to overtrust in the agent, the experts made little use of the explanations during the task; when asked afterward, however, they considered these to be useful. It is concluded that a team design should emphasize and support the human in developing a sense of responsibility for the agent's behavior and for the team's decisions. The design should include explanations that fit both the assigned team roles and the human's cognitive state.