
A paradigm shift? On the ethics of medical large language models.

Affiliations

Cluster of Excellence: "Machine Learning: New Perspectives for Science", University of Tübingen, Tübingen, Germany.

Hertie Institute for AI in Brain Health & Tübingen AI Center, Tübingen, Germany.

Publication

Bioethics. 2024 Jun;38(5):383-390. doi: 10.1111/bioe.13283. Epub 2024 Mar 25.

DOI: 10.1111/bioe.13283
PMID: 38523587
Abstract

After a wave of breakthroughs in image-based medical diagnostics and risk prediction models, machine learning (ML) has turned into a normal science. However, prominent researchers claim that another paradigm shift in medical ML is imminent: driven by the recent staggering successes of large language models, the field is moving from single-purpose applications toward generalist models steered by natural language. This article investigates the implications of this paradigm shift for the ethical debate. Focusing on issues such as trust, transparency, threats to patient autonomy, responsibility in the collaboration of clinicians and ML models, fairness, and privacy, it argues that the main problems will be continuous with the current debate. However, owing to the way large language models function, the complexity of all these problems increases. In addition, the article discusses some profound challenges for the clinical evaluation of large language models, as well as threats to the reproducibility and replicability of studies on large language models in medicine arising from corporate interests.


Similar Articles

1. A paradigm shift? On the ethics of medical large language models.
   Bioethics. 2024 Jun;38(5):383-390. doi: 10.1111/bioe.13283. Epub 2024 Mar 25.
2. Prescription of Controlled Substances: Benefits and Risks.
3. Sexual Harassment and Prevention Training.
4. The Black Book of Psychotropic Dosing and Monitoring.
   Psychopharmacol Bull. 2024 Jul 8;54(3):8-59.
5. Aspects of Genetic Diversity, Host Specificity and Public Health Significance of Single-Celled Intestinal Parasites Commonly Observed in Humans and Mostly Referred to as 'Non-Pathogenic'.
   APMIS. 2025 Sep;133(9):e70036. doi: 10.1111/apm.70036.
6. Stench of Errors or the Shine of Potential: The Challenge of (Ir)Responsible Use of ChatGPT in Speech-Language Pathology.
   Int J Lang Commun Disord. 2025 Jul-Aug;60(4):e70088. doi: 10.1111/1460-6984.70088.
7. Developing evidence-based guidelines for describing potential benefits and harms within patient information leaflets/sheets (PILs) that inform and do not cause harm (PrinciPILs).
   Health Technol Assess. 2025 Aug;29(43):1-20. doi: 10.3310/GJJH2402.
8. A systematic review of speech, language and communication interventions for children with Down syndrome from 0 to 6 years.
   Int J Lang Commun Disord. 2022 Mar;57(2):441-463. doi: 10.1111/1460-6984.12699. Epub 2022 Feb 22.
9. Home treatment for mental health problems: a systematic review.
   Health Technol Assess. 2001;5(15):1-139. doi: 10.3310/hta5150.
10. Stabilizing machine learning for reproducible and explainable results: A novel validation approach to subject-specific insights.
    Comput Methods Programs Biomed. 2025 Jun 21;269:108899. doi: 10.1016/j.cmpb.2025.108899.

Cited By

1. It Is Not About AI, It's About Humans. Responsibility Gaps and Medical AI.
   J Bioeth Inq. 2025 Jun 26. doi: 10.1007/s11673-025-10423-w.
2. The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs).
   NPJ Digit Med. 2024 Jul 8;7(1):183. doi: 10.1038/s41746-024-01157-x.