Suppr超能文献

大型语言模型和临床信用系统的模拟滥用情况。

Simulated misuse of large language models and clinical credit systems.

作者信息

Anibal James T, Huth Hannah B, Gunkel Jasmine, Gregurick Susan K, Wood Bradford J

机构信息

Center for Interventional Oncology, NIH Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA.

Department of Bioethics, National Institutes of Health (NIH), Bethesda, MD, USA.

出版信息

NPJ Digit Med. 2024 Nov 11;7(1):317. doi: 10.1038/s41746-024-01306-2.

Abstract

In the future, large language models (LLMs) may enhance the delivery of healthcare, but there are risks of misuse. These methods may be trained to allocate resources via unjust criteria involving multimodal data - financial transactions, internet activity, social behaviors, and healthcare information. This study shows that LLMs may be biased in favor of collective/systemic benefit over the protection of individual rights and could facilitate AI-driven social credit systems.

摘要

未来,大语言模型(LLMs)可能会改善医疗保健服务的提供,但也存在被滥用的风险。这些方法可能会被训练通过涉及多模态数据(金融交易、互联网活动、社会行为和医疗保健信息)的不公正标准来分配资源。这项研究表明,大语言模型可能偏向于集体/系统利益而非个人权利保护,并且可能会促进人工智能驱动的社会信用体系。

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2231/11554647/18dd2595105e/41746_2024_1306_Fig1_HTML.jpg

文献检索

告别复杂PubMed语法,用中文像聊天一样搜索,搜遍4000万医学文献。AI智能推荐,让科研检索更轻松。

立即免费搜索

文件翻译

保留排版,准确专业,支持PDF/Word/PPT等文件格式,支持 12+语言互译。

免费翻译文档

深度研究

AI帮你快速写综述,25分钟生成高质量综述,智能提取关键信息,辅助科研写作。

立即免费体验