Credit and blame for AI-generated content: Effects of personalization in four countries.

Authors

Earp Brian D, Porsdam Mann Sebastian, Liu Peng, Hannikainen Ivar, Khan Maryam Ali, Chu Yueying, Savulescu Julian

Affiliations

Uehiro Oxford Institute, University of Oxford, Oxford, UK.

Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.

Publication Information

Ann N Y Acad Sci. 2024 Dec;1542(1):51-57. doi: 10.1111/nyas.15258. Epub 2024 Nov 25.

Abstract

Generative artificial intelligence (AI) raises ethical questions concerning moral and legal responsibility; specifically, the attributions of credit and blame for AI-generated content. For example, if a human invests minimal skill or effort to produce a beneficial output with an AI tool, can the human still take credit? How does the answer change if the AI has been personalized (i.e., fine-tuned) on previous outputs produced without AI assistance by the same human? We conducted a preregistered experiment with representative sampling (N = 1802) repeated in four countries (United States, United Kingdom, China, and Singapore). We investigated laypeople's attributions of credit and blame to human users for producing beneficial or harmful outputs with a standard large language model (LLM), a personalized LLM, or no AI assistance (control condition). Participants generally attributed more credit to human users of personalized versus standard LLMs for beneficial outputs, whereas LLM type did not significantly affect blame attributions for harmful outputs, with a partial exception among Chinese participants. In addition, UK participants attributed more blame for using any type of LLM versus no LLM. Practical, ethical, and policy implications of these findings are discussed.
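The abstract describes personalization only as fine-tuning an LLM on text the same user previously wrote without AI assistance. As a rough illustration of what such a pipeline could look like in practice, below is a minimal Python sketch that packages a user's prior documents into a chat-format JSONL training file of the kind several fine-tuning APIs accept. The directory name, prompts, and schema details are illustrative assumptions, not taken from the study.

```python
import json
from pathlib import Path

# Hypothetical sketch: convert a user's previously written documents into a
# chat-format JSONL fine-tuning dataset ({"messages": [...]} records), so a
# base LLM can be tuned toward that user's own writing style. The directory
# name, prompts, and one-example-per-document scheme are assumptions made
# for illustration, not details reported in the paper.

def build_finetune_dataset(doc_dir: str, out_path: str) -> int:
    """Write one training example per prior document; return the count."""
    examples = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for doc in sorted(Path(doc_dir).glob("*.txt")):
            text = doc.read_text(encoding="utf-8").strip()
            if not text:
                continue  # skip empty files
            record = {
                "messages": [
                    {"role": "system",
                     "content": "You write in the style of this specific user."},
                    {"role": "user",
                     "content": f"Write a piece titled '{doc.stem}'."},
                    # The user's own prior text serves as the target output.
                    {"role": "assistant", "content": text},
                ]
            }
            out.write(json.dumps(record, ensure_ascii=False) + "\n")
            examples += 1
    return examples

if __name__ == "__main__":
    n = build_finetune_dataset("my_past_writing", "personalization.jsonl")
    print(f"Wrote {n} training examples to personalization.jsonl")
```

Treating each whole document as a single completion is a simplification; a real pipeline would likely chunk long texts and balance the dataset before fine-tuning.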

Figure (from the article): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d308/11668494/a621b6f28b87/NYAS-1542-51-g004.jpg
