Cybersecurity Threats and Mitigation Strategies for Large Language Models in Health Care

Author Information

Akinci D'Antonoli Tugba, Tejani Ali S, Khosravi Bardia, Bluethgen Christian, Busch Felix, Bressem Keno K, Adams Lisa C, Moassefi Mana, Faghani Shahriar, Gichoya Judy Wawira

Affiliations

Department of Diagnostic and Interventional Neuroradiology, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland.

Department of Pediatric Radiology, University Children's Hospital Basel, Basel, Switzerland.

Publication Information

Radiol Artif Intell. 2025 Jul;7(4):e240739. doi: 10.1148/ryai.240739.

Abstract

The integration of large language models (LLMs) into health care offers tremendous opportunities to improve medical practice and patient care. Besides being susceptible to the biases and threats common to all artificial intelligence (AI) systems, LLMs pose unique cybersecurity risks that must be carefully evaluated before these models are deployed in health care. LLMs can be exploited in several ways, including malicious attacks, privacy breaches, and unauthorized manipulation of patient data. Malicious actors could use LLMs to infer sensitive patient information from training data, and manipulated or poisoned data fed into these models could alter their outputs in ways that benefit attackers. This report presents the cybersecurity challenges posed by LLMs in health care and provides strategies for mitigation. By implementing robust security measures and adhering to best practices during the model development, training, and deployment stages, stakeholders can help minimize these risks and protect patient privacy.

Keywords: Computer Applications-General (Informatics), Application Domain, Large Language Models, Artificial Intelligence, Cybersecurity

© RSNA, 2025
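One class of mitigation the abstract points to is protecting patient privacy before clinical text ever reaches an LLM. The sketch below is a minimal, illustrative redactor that masks obvious identifiers (medical record numbers, dates, phone numbers) with placeholder tags; the pattern set and labels are assumptions for illustration only, not the report's method, and a real deployment would rely on a validated de-identification pipeline rather than a handful of regular expressions.

```python
import re

# Illustrative patterns for common identifiers in clinical free text.
# Coverage here is deliberately minimal; real PHI de-identification
# requires far broader, validated rules.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace each pattern match with a [CATEGORY] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient MRN: 12345678 seen on 03/14/2024, callback 555-123-4567."
print(redact_phi(note))
# → Patient [MRN] seen on [DATE], callback [PHONE].
```

Redacting at the boundary between the clinical record and the model limits what an attacker can later extract from the model or its logs, which is one way to reduce the training-data inference risk described above.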

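The data-poisoning threat described in the abstract can be partially mitigated by verifying training data integrity before each training run. The sketch below, a hypothetical example not taken from the report, compares each file's SHA-256 digest against a trusted manifest recorded at collection time, flagging any file that was altered afterward; file names and the manifest format are illustrative.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_manifest(manifest: dict, root: Path) -> list:
    """Return names of files whose digest no longer matches the manifest."""
    return [name for name, digest in manifest.items()
            if sha256_of(root / name) != digest]

# Demo with a temporary file standing in for a training-data file.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "notes.csv").write_bytes(b"age,dx\n54,pneumonia\n")
    manifest = {"notes.csv": sha256_of(root / "notes.csv")}
    print(verify_manifest(manifest, root))   # → [] (untampered)
    (root / "notes.csv").write_bytes(b"age,dx\n54,normal\n")  # simulated tampering
    print(verify_manifest(manifest, root))   # → ['notes.csv'] (flagged)
```

A hash manifest only detects tampering after collection; it cannot catch data that was poisoned before the manifest was recorded, so it complements rather than replaces provenance controls during data acquisition.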
