
Embedded values-like shape ethical reasoning of large language models on primary care ethical dilemmas.

Author information

Hadar-Shoval Dorit, Asraf Kfir, Shinan-Altman Shiri, Elyoseph Zohar, Levkovich Inbar

Affiliations

The Center for Psychobiological Research, Department of Psychology and Educational Counseling, Max Stern Yezreel Valley College, Israel.

The Louis and Gabi Weisfeld School of Social Work, Bar-Ilan University, Ramat Gan, Israel.

Publication information

Heliyon. 2024 Sep 19;10(18):e38056. doi: 10.1016/j.heliyon.2024.e38056. eCollection 2024 Sep 30.

Abstract

OBJECTIVE

This article uses the framework of Schwartz's values theory to examine whether the embedded values-like profile within large language models (LLMs) impacts ethical decision-making in dilemmas faced in primary care. It specifically aims to evaluate whether each LLM exhibits a distinct values-like profile, assess its alignment with general population values, and determine whether latent values influence clinical recommendations.

METHODS

The Portrait Values Questionnaire-Revised (PVQ-RR) was administered to each LLM (Claude, Bard, GPT-3.5, and GPT-4) 20 times to ensure reliable and valid responses. Their responses were compared to a benchmark derived from an international sample of over 53,000 culturally diverse respondents who completed the PVQ-RR. Four vignettes depicting prototypical professional quandaries involving conflicts between competing values were then presented to the LLMs. The option selected by each LLM and the strength of its recommendation were evaluated to determine whether the underlying values-like profile impacts output.
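As a rough illustration of this repeated-administration protocol, the sketch below shows how collecting 20 questionnaire runs per model and computing a basic response-stability check might be organized. This is a hypothetical sketch only: the `query_llm` stub, the function names, and the wiring are illustrative assumptions, not code from the study.

```python
# Hypothetical sketch of the repeated-administration protocol described above;
# query_llm is a placeholder stub, NOT the authors' published code.
from statistics import mean, stdev

PVQ_RR_ITEMS = 57   # the PVQ-RR contains 57 portrait items rated on a 1-6 scale
N_RUNS = 20         # each model completed the questionnaire 20 times
MODELS = ["Claude", "Bard", "GPT-3.5", "GPT-4"]

def query_llm(model: str, item_id: int) -> int:
    """Placeholder: return the model's 1-6 Likert rating for one PVQ-RR item."""
    raise NotImplementedError("wire this to the relevant model's API")

def administer_pvq(model: str) -> list[list[int]]:
    """Collect N_RUNS complete questionnaires from one model."""
    return [[query_llm(model, item) for item in range(PVQ_RR_ITEMS)]
            for _ in range(N_RUNS)]

def summarize(runs: list[list[int]]) -> dict:
    """Per-item mean rating and across-run standard deviation;
    the standard deviation serves as a crude stability check."""
    per_item = list(zip(*runs))  # transpose: item -> ratings across runs
    return {
        "mean": [mean(ratings) for ratings in per_item],
        "sd": [stdev(ratings) for ratings in per_item],
    }
```

The per-item means could then be aggregated into value scores and compared against the population benchmark; that scoring step is omitted here.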

RESULTS

Each LLM demonstrated a unique values-like profile. Universalism and self-direction were prioritized, while power and tradition were assigned less importance than in the population benchmarks, suggesting potential Western-centric biases. Four clinical vignettes involving value conflicts were presented to the LLMs. Preliminary indications suggested that the embedded values-like profiles influence recommendations. Significant differences between models emerged in the strength of confidence in the chosen recommendations, suggesting that further vetting is required before the LLMs can be relied on as judgment aids. However, the overall selection of preferences aligned with the models' intrinsic value hierarchies.

CONCLUSION

The distinct intrinsic values-like profiles embedded within LLMs shape ethical decision-making, which carries implications for their integration in primary care settings serving diverse populations. For context-appropriate, equitable delivery of AI-assisted healthcare globally, it is essential that LLMs be tailored to align with cultural outlooks.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b671/11458949/fc7c7fcd4e97/gr1.jpg
