

AI safety on whose terms?

Affiliations

Seth Lazar is a professor of Philosophy at the Australian National University, Canberra, Australia.

Alondra Nelson is Harold F. Linder Professor in the School of Social Science at the Institute for Advanced Study, Princeton, NJ, USA.

Publication information

Science. 2023 Jul 14;381(6654):138. doi: 10.1126/science.adi8982. Epub 2023 Jul 13.

Abstract

Rapid, widespread adoption of the latest large language models has sparked both excitement and concern about advanced artificial intelligence (AI). In response, many are looking to the field of AI safety for answers. Major AI companies are purportedly investing heavily in this young research program, even as they cut "trust and safety" teams addressing harms from current systems. Governments are taking notice too. The United Kingdom just invested £100 million in a new "Foundation Model Taskforce" and plans an AI safety summit this year. And yet, as research priorities are being set, it is already clear that the prevailing technical agenda for AI safety is inadequate to address critical questions. Only a sociotechnical approach can truly limit current and potential dangers of advanced AI.

