

Comparing artificial intelligence- vs clinician-authored summaries of simulated primary care electronic health records.

Authors

Shemtob Lara, Nouri Abdullah, Harvey-Sullivan Adam, Qiu Connor S, Martin Jonathan, Martin Martha, Noden Sara, Rob Tanveer, Neves Ana L, Majeed Azeem, Clarke Jonathan, Beaney Thomas

Affiliations

Department of Primary Care and Public Health, Imperial College London, London W12 0BZ, United Kingdom.

St Andrews Health Centre, London E3 3FF, United Kingdom.

Publication Information

JAMIA Open. 2025 Jul 30;8(4):ooaf082. doi: 10.1093/jamiaopen/ooaf082. eCollection 2025 Aug.


DOI: 10.1093/jamiaopen/ooaf082
PMID: 40741008
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12309840/
Abstract

OBJECTIVE: To compare clinical summaries generated from simulated patient primary care electronic health records (EHRs) by GPT-4 with summaries generated by clinicians, across multiple domains of quality including utility, concision, accuracy, and bias.

MATERIALS AND METHODS: Seven primary care physicians generated 70 simulated patient EHR notes, each representing 10 patient contacts with the practice over at least 2 years. Each record was summarized by a different clinician and by GPT-4. Artificial intelligence (AI)- and clinician-authored summaries were rated blind by clinicians according to 8 domains of quality and an overall rating.

RESULTS: The median time taken for a clinician to read through and assimilate the information in the EHRs before summarizing was 7 minutes. Clinicians rated clinician-authored summaries higher than AI-authored summaries overall (7.39 vs 7.00 out of 10; P = .02), but with greater variability in clinician-authored summary ratings. AI- and clinician-authored summaries had similar accuracy, and AI-authored summaries were less likely to omit important information and more likely to use patient-friendly language.

DISCUSSION: Although AI-authored summaries were rated slightly lower overall compared with clinician-authored summaries, they demonstrated similar accuracy and greater consistency. This demonstrates potential applications for generating summaries in primary care, particularly given the substantial time taken for clinicians to undertake this work.

CONCLUSION: The results suggest the feasibility, utility, and acceptability of integrating AI-authored summaries into EHRs to support clinicians in primary care. AI summarization tools have the potential to improve healthcare productivity, including by enabling clinicians to spend more time on direct patient care.
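The core finding above is a comparison of two rating distributions: clinician-authored summaries scored higher on average but with greater variability. A minimal, self-contained sketch of that kind of comparison is shown below; the scores and the function name are illustrative assumptions, not the study's data or code.

```python
import statistics

def summarize_ratings(ratings):
    """Return (mean, sample standard deviation) of 1-10 blind ratings,
    rounded to 2 decimal places. Hypothetical helper, not from the paper."""
    return (round(statistics.mean(ratings), 2),
            round(statistics.stdev(ratings), 2))

# Hypothetical rater scores chosen to mirror the pattern reported:
# clinician summaries rated higher on average but more variable.
clinician_scores = [9, 8, 5, 9, 6, 10, 7]
ai_scores = [7, 7, 8, 6, 7, 8, 7]

print(summarize_ratings(clinician_scores))  # → (7.71, 1.8)
print(summarize_ratings(ai_scores))         # → (7.14, 0.69)
```

In the study itself, the difference in overall ratings was tested for significance (P = .02); the sketch only illustrates how a higher mean can coexist with a wider spread, which is the consistency advantage the abstract attributes to the AI-authored summaries.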


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ad73/12309840/81b553192cc9/ooaf082f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ad73/12309840/c356ef494e54/ooaf082f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ad73/12309840/4bc5a1f960f5/ooaf082f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ad73/12309840/fe2b8d51e1d1/ooaf082f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ad73/12309840/786cd5add260/ooaf082f5.jpg

Similar Articles

[1]
Comparing artificial intelligence- vs clinician-authored summaries of simulated primary care electronic health records.

JAMIA Open. 2025-7-30

[2]
Improving Large Language Models' Summarization Accuracy by Adding Highlights to Discharge Notes: Comparative Evaluation.

JMIR Med Inform. 2025-7-24

[3]
Falls prevention interventions for community-dwelling older adults: systematic review and meta-analysis of benefits, harms, and patient values and preferences.

Syst Rev. 2024-11-26

[4]
Signs and symptoms to determine if a patient presenting in primary care or hospital outpatient settings has COVID-19.

Cochrane Database Syst Rev. 2022-5-20

[5]
Evaluating Large Language Models for Drafting Emergency Department Discharge Summaries.

medRxiv. 2024-4-4

[6]
Utility of Generative Artificial Intelligence for Japanese Medical Interview Training: Randomized Crossover Pilot Study.

JMIR Med Educ. 2025-8-1

[7]
The potential of Generative Pre-trained Transformer 4 (GPT-4) to analyse medical notes in three different languages: a retrospective model-evaluation study.

Lancet Digit Health. 2025-1

[8]
Artificial Intelligence to Improve Clinical Coding Practice in Scandinavia: Crossover Randomized Controlled Trial.

J Med Internet Res. 2025-7-3

[9]
AI Scribes in Health Care: Balancing Transformative Potential With Responsible Integration.

JMIR Med Inform. 2025-8-1

[10]
The diagnostic and triage accuracy of the GPT-3 artificial intelligence model: an observational study.

Lancet Digit Health. 2024-8

