Noam Kolt, Michal Shur-Ofry, Reuven Cohen
Faculty of Law, Hebrew University, Jerusalem, Israel.
School of Computer Science and Engineering, Hebrew University, Jerusalem, Israel.
Patterns (N Y). 2025 Aug 1;6(8):101341. doi: 10.1016/j.patter.2025.101341. eCollection 2025 Aug 8.
The study of complex adaptive systems, pioneered in physics, biology, and the social sciences, offers important lessons for artificial intelligence (AI) governance. Contemporary AI systems and the environments in which they operate exhibit many of the properties characteristic of complex systems, including nonlinear growth patterns, emergent phenomena, and cascading effects that can lead to catastrophic failures. Complex systems science can help illuminate the features of AI that pose central challenges for policymakers, such as feedback loops induced by training AI models on synthetic data and the interconnectedness between AI systems and critical infrastructure. Drawing on insights from other domains shaped by complex systems, including public health and climate change, we examine how efforts to govern AI are marked by deep uncertainty. To contend with this challenge, we propose three desiderata for designing a set of complexity-compatible AI governance principles: early and scalable intervention, adaptive institutional design, and risk thresholds calibrated to trigger timely and effective regulatory responses.