Jaspreet Pannu, Doni Bloomfield, Robert MacKnight, Moritz S Hanke, Alex Zhu, Gabe Gomes, Anita Cicero, Thomas V Inglesby
Center for Health Security, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland.
Department of Health Policy, Stanford School of Medicine, Stanford University, Stanford, California, United States of America.
PLoS Comput Biol. 2025 May 8;21(5):e1012975. doi: 10.1371/journal.pcbi.1012975. eCollection 2025 May.
As a result of rapidly accelerating artificial intelligence (AI) capabilities, multiple national governments and multinational bodies have launched efforts to address safety, security, and ethics issues related to AI models. One high priority among these efforts is the mitigation of misuse of AI models, such as for the development of chemical, biological, radiological, or nuclear (CBRN) threats. Many biologists have for decades sought to reduce the risks of scientific research that could lead, through accident or misuse, to high-consequence disease outbreaks. Scientists have carefully considered which types of life sciences research have the potential for both benefit and risk (dual use), especially as scientific advances have accelerated our ability to engineer organisms. Here we describe how previous experience and study by scientists and policy professionals of dual-use research in the life sciences can inform dual-use capabilities of AI models trained using biological data. Of these dual-use capabilities, we argue that AI model evaluations should prioritize addressing those which enable high-consequence risks (i.e., large-scale harm to the public, such as transmissible disease outbreaks that could develop into pandemics), and that these risks should be evaluated prior to model deployment so as to allow time for potential biosafety and/or biosecurity measures to be applied. While biological research is on balance immensely beneficial, it is well recognized that some biological information or technologies could be intentionally or inadvertently misused to cause consequential harm to the public. AI-enabled life sciences research is no different. Scientists' historical experience with identifying and mitigating dual-use biological risks can thus help inform new approaches to evaluating biological AI models.
Identifying which AI capabilities pose the greatest biosecurity and biosafety concerns is necessary to establish targeted AI safety evaluation methods, secure these tools against accident and misuse, and avoid impeding their immense potential benefits.