Huw Price
Trinity College, University of Cambridge, Cambridge, UK.
Centre for Science and Thought, University of Bonn, Bonn, Nordrhein-Westfalen, Germany.
R Soc Open Sci. 2024 May 15;11(5):231583. doi: 10.1098/rsos.231583. eCollection 2024 May.
One of the basic principles of risk management is that we should always keep an eye on ways that things could go badly wrong, even if they seem unlikely. The more disastrous a potential failure, the more improbable it needs to be, before we can safely ignore it. This principle may seem obvious, but it is easily overlooked in public discourse about risk, even by well-qualified commentators who should certainly know better. The present piece is prompted by neglect of the principle in recent discussions about the potential existential risks of artificial intelligence. The failing is not peculiar to this case, but recent debates in this area provide some particularly stark examples of how easily the principle can be overlooked.