
Do large language models have a legal duty to tell the truth?

Authors

Sandra Wachter, Brent Mittelstadt, Chris Russell

Affiliation

Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford OX1 3JS, UK.

Publication

R Soc Open Sci. 2024 Aug 7;11(8):240197. doi: 10.1098/rsos.240197. eCollection 2024 Aug.

Abstract

Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education and shared social truth in democratic societies. LLMs produce responses that are plausible, helpful and confident, but that contain factual inaccuracies, misleading references and biased information. These subtle mistruths are poised to cumulatively degrade and homogenize knowledge over time. This article examines the existence and feasibility of a legal duty for LLM providers to create models that 'tell the truth'. We argue that LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. We define careless speech against 'ground truth' in LLMs and related risks including hallucinations, misinformation and disinformation. We assess the existence of truth-related obligations in EU human rights law and the Artificial Intelligence Act, Digital Services Act, Product Liability Directive and Artificial Intelligence Liability Directive. Current frameworks contain limited, sector-specific truth duties. Drawing on duties in science and academia, education, archives and libraries, and a German case in which Google was held liable for defamation caused by autocomplete, we propose a pathway to create a legal truth duty for providers of narrow- and general-purpose LLMs.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0e61/11303832/5b9c7c7d4d23/rsos240197f01.jpg
