ONERA, Toulouse, France.
Ergonomics. 2014;57(3):319-31. doi: 10.1080/00140139.2013.877597. Epub 2014 Jan 21.
Analyses of aviation safety reports reveal that human-machine conflicts induced by poor automation design are notable precursors of accidents. A review of different crew-automation conflict scenarios shows that they share a common denominator: the autopilot's behaviour interferes with the pilot's flight-guidance goal via 'hidden' mode transitions. Considering both the human operator and the machine (i.e. the autopilot or the decision functions) as agents, we propose a Petri net model of these conflicting interactions, which allows the conflicts to be detected as deadlocks in the Petri net. To test our Petri net model, we designed an autoflight system that was formally analysed to detect conflicting situations. We identified three conflicting situations, which were integrated into an experimental scenario in a flight simulator with 10 general aviation pilots. The results showed that the conflicts we had a priori identified as critical affected the pilots' performance. Indeed, the first conflict remained unnoticed by eight participants and led to a potential collision with another aircraft. The second conflict was detected by all participants, but three of them did not manage the situation correctly. The last conflict was also detected by all participants but provoked a typical automation surprise, as only one participant stated that he had understood the autopilot's behaviour. These behavioural results are discussed in terms of workload and the number of fired 'hidden' transitions. Overall, this study shows that formal and experimental approaches are complementary for identifying and assessing the criticality of human-automation conflicts. Practitioner Summary: We propose a Petri net model of human-automation conflicts. An experiment was conducted with general aviation pilots performing a scenario involving three conflicting situations to test the soundness of our formal approach.
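To make the central idea concrete, the sketch below shows how a conflict of this kind can surface as a deadlock in a tiny Petri net. This is a minimal illustration under our own assumptions, not the authors' actual model or autoflight system: all place and transition names (e.g. `pilot_goal_climb`, `hidden_capture`) are hypothetical, and the net is a toy with one 'hidden' autopilot mode transition that can consume the resource the pilot's action needs.

```python
from collections import deque

def enabled(marking, transitions):
    """Return the transitions whose input places all hold a token."""
    return [t for t, (pre, post) in transitions.items()
            if all(marking[p] > 0 for p in pre)]

def fire(marking, pre, post):
    """Fire a transition: consume one token per input place, produce one per output place."""
    m = dict(marking)
    for p in pre:
        m[p] -= 1
    for p in post:
        m[p] += 1
    return m

def find_deadlocks(initial, transitions):
    """Breadth-first search over reachable markings of a bounded net,
    collecting every marking in which no transition is enabled."""
    seen = {tuple(sorted(initial.items()))}
    queue = deque([initial])
    deadlocks = []
    while queue:
        m = queue.popleft()
        en = enabled(m, transitions)
        if not en:
            deadlocks.append(m)
        for t in en:
            pre, post = transitions[t]
            m2 = fire(m, pre, post)
            key = tuple(sorted(m2.items()))
            if key not in seen:
                seen.add(key)
                queue.append(m2)
    return deadlocks

# Toy conflict (hypothetical names): the pilot's climb action needs the
# autopilot to be in vertical-speed mode, but a 'hidden' altitude capture
# can fire autonomously and remove that mode first.
transitions = {
    "hidden_capture": (["ap_mode_vs"], ["ap_mode_alt_hold"]),
    "pilot_climb": (["pilot_goal_climb", "ap_mode_vs"], ["climbing"]),
}
initial = {"pilot_goal_climb": 1, "ap_mode_vs": 1,
           "ap_mode_alt_hold": 0, "climbing": 0}

deadlocks = find_deadlocks(initial, transitions)
```

In this toy net, one reachable deadlock still carries the token in `pilot_goal_climb` with `climbing` empty: the hidden transition fired first and the pilot's goal can never be satisfied, which is the kind of marking the formal analysis would flag as a human-automation conflict (the other terminal marking, where the climb succeeded, is a benign end state).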