Anibal James T, Huth Hannah B, Gunkel Jasmine, Gregurick Susan K, Wood Bradford J
Center for Interventional Oncology, NIH Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA.
Department of Bioethics, National Institutes of Health (NIH), Bethesda, MD, USA.
NPJ Digit Med. 2024 Nov 11;7(1):317. doi: 10.1038/s41746-024-01306-2.
In the future, large language models (LLMs) may enhance the delivery of healthcare, but there are risks of misuse. Such models could be trained to allocate resources according to unjust criteria drawn from multimodal data, including financial transactions, internet activity, social behaviors, and healthcare information. This study shows that LLMs may be biased in favor of collective/systemic benefit over the protection of individual rights, and could thereby facilitate AI-driven social credit systems.