
Scientific Evidence for Clinical Text Summarization Using Large Language Models: Scoping Review.

Author Information

Bednarczyk Lydie, Reichenpfader Daniel, Gaudet-Blavignac Christophe, Ette Amon Kenna, Zaghir Jamil, Zheng Yuanyuan, Bensahla Adel, Bjelogrlic Mina, Lovis Christian

Affiliations

Division of Medical Information Sciences, University Hospital of Geneva, Geneva, Switzerland.

Institute for Patient-centered Digital Health, Bern University of Applied Sciences, Biel, Switzerland.

Publication Information

J Med Internet Res. 2025 May 15;27:e68998. doi: 10.2196/68998.


DOI: 10.2196/68998
PMID: 40371947
Abstract

BACKGROUND: Information overload in electronic health records requires effective solutions to alleviate clinicians' administrative tasks. Automatically summarizing clinical text has gained significant attention with the rise of large language models. While individual studies show optimism, a structured overview of the research landscape is lacking.

OBJECTIVE: This study aims to present the current state of the art on clinical text summarization using large language models, evaluate the level of evidence in existing research, and assess the applicability of performance findings in clinical settings.

METHODS: This scoping review complied with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. Literature published between January 1, 2019, and June 18, 2024, was identified from 5 databases: PubMed, Embase, Web of Science, IEEE Xplore, and ACM Digital Library. Studies were excluded if they did not describe transformer-based models, did not focus on clinical text summarization, did not engage with free-text data, were not original research, were nonretrievable, were not peer-reviewed, or were not in English, French, Spanish, or German. Data related to study context and characteristics, scope of research, and evaluation methodologies were systematically collected and analyzed by 3 authors independently.

RESULTS: A total of 30 original studies were included in the analysis. All used observational retrospective designs, mainly using real patient data (n=28, 93%). The research landscape demonstrated a narrow research focus, often centered on summarizing radiology reports (n=17, 57%), primarily involving data from the intensive care unit (n=15, 50%) of US-based institutions (n=19, 73%), in English (n=26, 87%). This focus aligned with the frequent reliance on the open-source Medical Information Mart for Intensive Care dataset (n=15, 50%). Summarization methodologies predominantly involved abstractive approaches (n=17, 57%) on single-document inputs (n=4, 13%) with unstructured data (n=13, 43%), yet reporting on methodological details remained inconsistent across studies. Model selection involved both open-source models (n=26, 87%) and proprietary models (n=7, 23%). Evaluation frameworks were highly heterogeneous. All studies conducted internal validation, but external validation (n=2, 7%), failure analysis (n=6, 20%), and patient safety risk analysis (n=1, 3%) were infrequent, and none reported bias assessment. Most studies used both automated metrics and human evaluation (n=16, 53%), while 10 (33%) used only automated metrics, and 4 (13%) only human evaluation.

CONCLUSIONS: Key barriers hinder the translation of current research into trustworthy, clinically valid applications. Current research remains exploratory and limited in scope, with many applications yet to be explored. Performance assessments often lack reliability, and clinical impact evaluations are insufficient, raising concerns about model utility, safety, fairness, and data privacy. Advancing the field requires more robust evaluation frameworks, a broader research scope, and a stronger focus on real-world applicability.
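The automated metrics that most of the reviewed studies rely on are typically surface n-gram overlap scores such as ROUGE. As a minimal illustrative sketch (not taken from the reviewed studies; the example texts are hypothetical), a ROUGE-1 F1 score between a model-generated summary and a clinician-written reference can be computed like this:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a candidate summary and a reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical radiology-report impression vs. model output
reference = "no acute cardiopulmonary abnormality"
candidate = "no acute cardiopulmonary disease identified"
print(round(rouge1_f1(candidate, reference), 3))  # → 0.667
```

Such overlap metrics score lexical similarity only; as the review's conclusions note, they cannot detect clinically dangerous errors such as hallucinated findings, which is why human evaluation, failure analysis, and safety assessment remain essential.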


Similar Articles

[1]
Scientific Evidence for Clinical Text Summarization Using Large Language Models: Scoping Review.

J Med Internet Res. 2025-5-15

[2]
Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.

Cochrane Database Syst Rev. 2022-2-1

[3]
Beyond the black stump: rapid reviews of health research issues affecting regional, rural and remote Australia.

Med J Aust. 2020-12

[4]
Natural Language Processing for Work-Related Stress Detection Among Health Professionals: Protocol for a Scoping Review.

JMIR Res Protoc. 2024-5-15

[5]
Large Language Model Applications for Health Information Extraction in Oncology: Scoping Review.

JMIR Cancer. 2025-3-28

[6]
The future of Cochrane Neonatal.

Early Hum Dev. 2020-11

[7]
Use of SNOMED CT in Large Language Models: Scoping Review.

JMIR Med Inform. 2024-10-7

[8]
Patient Information Summarization in Clinical Settings: Scoping Review.

JMIR Med Inform. 2023-11-28

[9]
Applying AI to Structured Real-World Data for Pharmacovigilance Purposes: Scoping Review.

J Med Internet Res. 2024-12-30

[10]
Large Language Models for Mental Health Applications: Systematic Review.

JMIR Ment Health. 2024-10-18

