Harm Reduction Strategies for Thoughtful Use of Large Language Models in the Medical Domain: Perspectives for Patients and Clinicians.

Author Information

Moëll Birger, Sand Aronsson Fredrik

Affiliations

Division of Speech, Music and Hearing, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Lindstedsvägen 24, Stockholm, 114 28, Sweden, 46 704851893.

Department of Clinical Science, Intervention and Technology, Division of Speech and Language Pathology, Karolinska Institutet, Stockholm, Sweden.

Publication Information

J Med Internet Res. 2025 Jul 25;27:e75849. doi: 10.2196/75849.

Abstract

The integration of large language models (LLMs) into health care presents significant risks to patients and clinicians, inadequately addressed by current guidance. This paper adapts harm reduction principles from public health to medical LLMs, proposing a structured framework for mitigating these domain-specific risks while maximizing ethical utility. We outline tailored strategies for patients, emphasizing critical health literacy and output verification, and for clinicians, enforcing "human-in-the-loop" validation and bias-aware workflows. Key innovations include developing thoughtful use protocols that position LLMs as assistive tools requiring mandatory verification, establishing actionable institutional policies with risk-stratified deployment guidelines and patient disclaimers, and critically analyzing underaddressed regulatory, equity, and safety challenges. This research moves beyond theory to offer a practical roadmap, enabling stakeholders to ethically harness LLMs, balance innovation with accountability, and preserve core medical values: patient safety, equity, and trust in high-stakes health care settings.
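The framework's operational core, mandatory clinician verification of LLM output under a risk-stratified deployment policy with patient-facing disclaimers, can be pictured as a simple release gate. The sketch below is an illustration only and does not come from the paper; the `Risk` tiers, `LLMDraft` record, and `release` function are hypothetical names chosen for this example.

```python
# Illustrative sketch (not from the paper): a minimal "human-in-the-loop" release gate
# that risk-stratifies an LLM draft and withholds high-risk output until a clinician approves it.
from dataclasses import dataclass, field
from enum import Enum


class Risk(Enum):
    LOW = "low"            # e.g., scheduling or administrative text
    MODERATE = "moderate"  # e.g., patient education material
    HIGH = "high"          # e.g., diagnostic or dosing suggestions


@dataclass
class LLMDraft:
    text: str
    risk: Risk
    clinician_approved: bool = False
    notes: list[str] = field(default_factory=list)


def release(draft: LLMDraft) -> str:
    """Return the draft only if the deployment policy allows it."""
    if draft.risk is Risk.HIGH and not draft.clinician_approved:
        raise PermissionError("High-risk output requires clinician verification before release.")
    disclaimer = "AI-assisted draft. Verify against primary sources and clinical judgment."
    return f"{draft.text}\n\n[{disclaimer}]"


# Usage: a dosing suggestion is held until a clinician reviews and signs off.
draft = LLMDraft(text="Suggested starting dose: ...", risk=Risk.HIGH)
try:
    print(release(draft))
except PermissionError as err:
    print(err)

draft.clinician_approved = True
draft.notes.append("Dose checked against formulary")
print(release(draft))
```

In this reading, low- and moderate-risk drafts pass with an attached disclaimer, while high-risk drafts are blocked until a clinician signs off, mirroring the abstract's "human-in-the-loop" validation and patient-disclaimer recommendations.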

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4e81/12296254/d2cd1bc4672c/jmir-v27-e75849-g001.jpg
