Undheim, Trond Arne
Stanford University, Stanford, CA, United States.
Center for International Security and Cooperation, Freeman Spogli Institute for International Studies, Stanford University, Stanford, CA, United States.
Front Bioeng Biotechnol. 2024 Feb 28;12:1359768. doi: 10.3389/fbioe.2024.1359768. eCollection 2024.
AI-enabled synthetic biology has tremendous potential, but it also significantly increases biorisks and raises a new set of dual-use concerns. The picture is complicated by the vast innovations expected to emerge from combining these emerging technologies, as AI-enabled synthetic biology could scale bioengineering up into industrial biomanufacturing. However, the literature review indicates that goals such as maintaining a reasonable scope for innovation, or, more ambitiously, fostering a large bioeconomy, do not necessarily conflict with biosafety; rather, the two need to go hand in hand. This paper presents a literature review of the issues and describes emerging frameworks for policy and practice that traverse the options of command-and-control, stewardship, bottom-up, and laissez-faire governance. Approaches to early warning systems that enable prevention and mitigation of future AI-enabled biohazards, whether arising in the lab, from deliberate misuse, or in the public realm, will need to evolve constantly, and adaptive, interactive approaches should emerge. Although biorisk is subject to an established governance regime, and scientists generally adhere to biosafety protocols, even experimental but legitimate use by scientists could lead to unexpected developments. Recent advances in chatbots enabled by generative AI have revived fears that advanced biological insight could more easily fall into the hands of malicious individuals or organizations. Given these issues, society needs to rethink how AI-enabled synthetic biology should be governed. The suggested way to visualize the challenge at hand is whack-a-mole governance, although the emerging solutions are perhaps not so different from one another.