Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Australia.
Appl Ergon. 2024 May;117:104245. doi: 10.1016/j.apergo.2024.104245. Epub 2024 Feb 5.
There are concerns that Artificial General Intelligence (AGI) could pose an existential threat to humanity; however, as AGI does not yet exist, it is difficult to prospectively identify risks and develop the requisite controls. We applied the Work Domain Analysis-Broken Nodes (WDA-BN) and Event Analysis of Systemic Teamwork-Broken Links (EAST-BL) methods to identify potential risks in a future 'envisioned world' AGI-based uncrewed combat aerial vehicle system. The findings suggest five main categories of risk in this context: sub-optimal performance risks, goal alignment risks, super-intelligence risks, over-control risks, and enfeeblement risks. Two of these categories, goal alignment risks and super-intelligence risks, have not previously been encountered or dealt with in conventional safety management systems. Whereas most of the identified sub-optimal performance risks can be managed through existing defence design lifecycle processes, we propose that further work is required to develop controls for the other identified risks. These include controls on AGI developers, controls within the AGI itself, and broader sociotechnical system controls.
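As an illustrative sketch only, not taken from the paper: the WDA-BN and EAST-BL methods work by systematically asking what risks emerge when each node of a work domain model, or each link in a networked task model, fails or degrades. The Python fragment below shows how such broken-node and broken-link analysis prompts might be generated over a toy model; all node names, link names, and abstraction levels are hypothetical placeholders, not the paper's actual analysis.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        level: str  # abstraction hierarchy level, e.g. "functional purpose"

    @dataclass
    class Link:
        source: str
        target: str

    # Hypothetical fragment of a work domain model for an envisioned
    # AGI-based uncrewed combat aerial vehicle system; names are
    # illustrative only.
    nodes = [
        Node("Achieve mission objectives", "functional purpose"),
        Node("Comply with rules of engagement", "value/priority measure"),
        Node("Target identification", "object-related process"),
    ]
    links = [
        Link("Target identification", "Achieve mission objectives"),
    ]

    def broken_node_prompts(nodes):
        # WDA-BN: consider each node of the work domain model failing in turn.
        return [f"What risks emerge if '{n.name}' ({n.level}) fails or degrades?"
                for n in nodes]

    def broken_link_prompts(links):
        # EAST-BL: consider each network link breaking in turn.
        return [f"What risks emerge if the link '{l.source}' -> '{l.target}' breaks?"
                for l in links]

    for prompt in broken_node_prompts(nodes) + broken_link_prompts(links):
        print(prompt)

Each generated prompt would then be answered by domain experts to enumerate candidate risks, which is where categories such as goal alignment and super-intelligence risks would surface.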