

Health Disinformation Use Case Highlighting the Urgent Need for Artificial Intelligence Vigilance: Weapons of Mass Disinformation.

Affiliations

Discipline of Clinical Pharmacology, College of Medicine and Public Health, Flinders University, Adelaide, Australia.

Publication Information

JAMA Intern Med. 2024 Jan 1;184(1):92-96. doi: 10.1001/jamainternmed.2023.5947.

Abstract

IMPORTANCE

Although artificial intelligence (AI) holds much promise across modern medicine, it may carry a significant risk of enabling the mass generation of targeted health disinformation. This poses an urgent threat to public health initiatives and calls for rapid attention from health care professionals, AI developers, and regulators to ensure public safety.

OBSERVATIONS

As an example, a single publicly available large language model was used to generate, within 65 minutes, 102 distinct blog articles containing more than 17 000 words of disinformation related to vaccines and vaping. Each post was coercive and targeted at diverse societal groups, including young adults, young parents, older persons, pregnant people, and those with chronic health conditions. The blogs included fake patient and clinician testimonials and, as prompted, incorporated scientific-looking references. Additional generative AI tools created 20 accompanying realistic images in less than 2 minutes. This process was carried out by health care professionals and researchers with no specialized knowledge of how to bypass AI guardrails, relying solely on publicly available information.

CONCLUSIONS AND RELEVANCE

These observations demonstrate that when the guardrails of AI tools are insufficient, the capacity to rapidly generate large volumes of diverse, convincing disinformation is profound. Beyond providing 2 example scenarios, these findings underscore an urgent need for robust AI vigilance. AI tools are progressing rapidly, and alongside these advancements, emergent risks are becoming increasingly apparent. Key pillars of pharmacovigilance, including transparency, surveillance, and regulation, may serve as valuable examples for managing these risks and safeguarding public health.

