Jonas Schuett
Centre for the Governance of AI, Oxford, UK.
Risk Anal. 2025 Jun;45(6):1332-1352. doi: 10.1111/risa.17665. Epub 2024 Oct 21.
This article argues that frontier artificial intelligence (AI) developers need an internal audit function. First, it describes the role of internal audit in corporate governance: internal audit evaluates the adequacy and effectiveness of a company's risk management, control, and governance processes. It is organizationally independent of senior management and reports directly to the board of directors, typically its audit committee. In the Institute of Internal Auditors' Three Lines Model, internal audit serves as the third line and is responsible for providing assurance to the board, whereas the combined assurance framework highlights the need to coordinate the activities of internal and external assurance providers. Next, the article provides an overview of key governance challenges in frontier AI development: dangerous capabilities can arise unpredictably and undetected; it is difficult to prevent a deployed model from causing harm; frontier models can proliferate rapidly; it is inherently difficult to assess frontier AI risks; and frontier AI developers do not seem to follow best practices in risk governance. Finally, the article discusses how an internal audit function could address some of these challenges: internal audit could identify ineffective risk management practices; it could give the board of directors a more accurate understanding of the current level of risk and the adequacy of the developer's risk management practices; and it could serve as a contact point for whistleblowers. But frontier AI developers should also be aware of key limitations: internal audit adds friction; it can be captured by senior management; and its benefits depend on the ability of individuals to identify ineffective practices. In light of rapid progress in AI research and development, frontier AI developers need to strengthen their risk governance. Instead of reinventing the wheel, they should follow existing best practices. Although this might not be sufficient, they should not skip this obvious first step.