Do large language models have a legal duty to tell the truth?

Authors

Wachter Sandra, Mittelstadt Brent, Russell Chris

Affiliations

Oxford Internet Institute, University of Oxford, 1 St Giles, Oxford OX1 3JS, UK.

Publication

R Soc Open Sci. 2024 Aug 7;11(8):240197. doi: 10.1098/rsos.240197. eCollection 2024 Aug.

DOI: 10.1098/rsos.240197
PMID: 39113763
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11303832/
Abstract

Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education and shared social truth in democratic societies. LLMs produce responses that are plausible, helpful and confident, but that contain factual inaccuracies, misleading references and biased information. These subtle mistruths are poised to cumulatively degrade and homogenize knowledge over time. This article examines the existence and feasibility of a legal duty for LLM providers to create models that 'tell the truth'. We argue that LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. We define careless speech against 'ground truth' in LLMs and related risks including hallucinations, misinformation and disinformation. We assess the existence of truth-related obligations in EU human rights law and the Artificial Intelligence Act, Digital Services Act, Product Liability Directive and Artificial Intelligence Liability Directive. Current frameworks contain limited, sector-specific truth duties. Drawing on duties in science and academia, education, archives and libraries, and a German case in which Google was held liable for defamation caused by autocomplete, we propose a pathway to create a legal truth duty for providers of narrow- and general-purpose LLMs.

Figures (PMC):
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0e61/11303832/5b9c7c7d4d23/rsos240197f01.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0e61/11303832/1c1204f4f319/rsos240197f02.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0e61/11303832/494b71a2fa84/rsos240197f03.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0e61/11303832/8b482c432d3a/rsos240197f04.jpg

Similar articles

1. Do large language models have a legal duty to tell the truth? R Soc Open Sci. 2024 Aug 7;11(8):240197. doi: 10.1098/rsos.240197. eCollection 2024 Aug.
2. Large Language Models and User Trust: Consequence of Self-Referential Learning Loop and the Deskilling of Health Care Professionals. J Med Internet Res. 2024 Apr 25;26:e56764. doi: 10.2196/56764.
3. Medical malpractice liability in large language model artificial intelligence: legal review and policy recommendations. J Osteopath Med. 2024 Jan 31;124(7):287-290. doi: 10.1515/jom-2023-0229. eCollection 2024 Jul 1.
4. Emotional prompting amplifies disinformation generation in AI large language models. Front Artif Intell. 2025 Apr 7;8:1543603. doi: 10.3389/frai.2025.1543603. eCollection 2025.
5. Large language models as tax attorneys: a case study in legal capabilities emergence. Philos Trans A Math Phys Eng Sci. 2024 Apr 15;382(2270):20230159. doi: 10.1098/rsta.2023.0159. Epub 2024 Feb 26.
6. Potential of Large Language Models in Health Care: Delphi Study. J Med Internet Res. 2024 May 13;26:e52399. doi: 10.2196/52399.
7. Legal aspects of generative artificial intelligence and large language models in examinations and theses. GMS J Med Educ. 2024 Sep 16;41(4):Doc47. doi: 10.3205/zma001702. eCollection 2024.
8. Large Language Models in Worldwide Medical Exams: Platform Development and Comprehensive Analysis. J Med Internet Res. 2024 Dec 27;26:e66114. doi: 10.2196/66114.
9. Current safeguards, risk mitigation, and transparency measures of large language models against the generation of health disinformation: repeated cross sectional analysis. BMJ. 2024 Mar 20;384:e078538. doi: 10.1136/bmj-2023-078538.
10. [Focus: artificial intelligence in medicine-Legal aspects of using large language models in clinical practice]. Inn Med (Heidelb). 2025 Apr;66(4):436-441. doi: 10.1007/s00108-025-01861-0. Epub 2025 Mar 14.

Cited by

1. Integrating Generative AI in Dental Education: A Scoping Review of Current Practices and Recommendations. Eur J Dent Educ. 2025 May;29(2):341-355. doi: 10.1111/eje.13074. Epub 2025 Jan 31.
2. Quest for AI literacy. Nat Methods. 2024 Aug;21(8):1412-1415. doi: 10.1038/s41592-024-02369-5.

References

1. AI models collapse when trained on recursively generated data. Nature. 2024 Jul;631(8022):755-759. doi: 10.1038/s41586-024-07566-y. Epub 2024 Jul 24.
2. To protect science, we must use LLMs as zero-shot translators. Nat Hum Behav. 2023 Nov;7(11):1830-1832. doi: 10.1038/s41562-023-01744-0.
3. A large-scale comparison of human-written versus ChatGPT-generated essays. Sci Rep. 2023 Oct 30;13(1):18617. doi: 10.1038/s41598-023-45644-9.
4. How do we know how smart AI systems are? Science. 2023 Jul 14;381(6654):adj5957. doi: 10.1126/science.adj5957. Epub 2023 Jul 13.
5. How AI can distort human beliefs. Science. 2023 Jun 23;380(6651):1222-1223. doi: 10.1126/science.adi0248. Epub 2023 Jun 22.
6. Can AI language models replace human participants? Trends Cogn Sci. 2023 Jul;27(7):597-600. doi: 10.1016/j.tics.2023.04.008. Epub 2023 May 10.
7. Rethink reporting of evaluation results in AI. Science. 2023 Apr 14;380(6641):136-138. doi: 10.1126/science.adf6369. Epub 2023 Apr 13.
8. Large language models and the perils of their hallucinations. Crit Care. 2023 Mar 21;27(1):120. doi: 10.1186/s13054-023-04393-x.
9. Data and its (dis)contents: A survey of dataset development and use in machine learning research. Patterns (N Y). 2021 Nov 12;2(11):100336. doi: 10.1016/j.patter.2021.100336.
10. Accountable Artificial Intelligence: Holding Algorithms to Account. Public Adm Rev. 2021 Sep-Oct;81(5):825-836. doi: 10.1111/puar.13293. Epub 2020 Nov 11.