

Using ChatGPT-4 to Create Structured Medical Notes From Audio Recordings of Physician-Patient Encounters: Comparative Study.

Affiliation

Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, OR, United States.

Publication Information

J Med Internet Res. 2024 Apr 22;26:e54419. doi: 10.2196/54419.


DOI: 10.2196/54419
PMID: 38648636
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11074889/
Abstract

BACKGROUND: Medical documentation plays a crucial role in clinical practice, facilitating accurate patient management and communication among health care professionals. However, inaccuracies in medical notes can lead to miscommunication and diagnostic errors. Additionally, the demands of documentation contribute to physician burnout. Although intermediaries like medical scribes and speech recognition software have been used to ease this burden, they have limitations in terms of accuracy and addressing provider-specific metrics. The integration of ambient artificial intelligence (AI)-powered solutions offers a promising way to improve documentation while fitting seamlessly into existing workflows.

OBJECTIVE: This study aims to assess the accuracy and quality of Subjective, Objective, Assessment, and Plan (SOAP) notes generated by ChatGPT-4, an AI model, using established transcripts of History and Physical Examination as the gold standard. We seek to identify potential errors and evaluate the model's performance across different categories.

METHODS: We conducted simulated patient-provider encounters representing various ambulatory specialties and transcribed the audio files. Key reportable elements were identified, and ChatGPT-4 was used to generate SOAP notes based on these transcripts. Three versions of each note were created and compared to the gold standard via chart review; errors generated from the comparison were categorized as omissions, incorrect information, or additions. We compared the accuracy of data elements across versions, transcript length, and data categories. Additionally, we assessed note quality using the Physician Documentation Quality Instrument (PDQI) scoring system.

RESULTS: Although ChatGPT-4 consistently generated SOAP-style notes, there were, on average, 23.6 errors per clinical case, with errors of omission (86%) being the most common, followed by addition errors (10.5%) and inclusion of incorrect facts (3.2%). There was significant variance between replicates of the same case, with only 52.9% of data elements reported correctly across all 3 replicates. The accuracy of data elements varied across cases, with the highest accuracy observed in the "Objective" section. Consequently, the measure of note quality, assessed by PDQI, demonstrated intra- and intercase variance. Finally, the accuracy of ChatGPT-4 was inversely correlated to both the transcript length (P=.05) and the number of scorable data elements (P=.05).

CONCLUSIONS: Our study reveals substantial variability in errors, accuracy, and note quality generated by ChatGPT-4. Errors were not limited to specific sections, and the inconsistency in error types across replicates complicated predictability. Transcript length and data complexity were inversely correlated with note accuracy, raising concerns about the model's effectiveness in handling complex medical cases. The quality and reliability of clinical notes produced by ChatGPT-4 do not meet the standards required for clinical use. Although AI holds promise in health care, caution should be exercised before widespread adoption. Further research is needed to address accuracy, variability, and potential errors. ChatGPT-4, while valuable in various applications, should not be considered a safe alternative to human-generated clinical documentation at this time.
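The Methods describe two computations that are easy to misread in prose: tallying errors by category (omission, incorrect information, addition) across replicates, and measuring the fraction of data elements reported correctly in all 3 replicates of a case. The sketch below illustrates both; all names and the sample data are hypothetical, not taken from the study.

```python
from collections import Counter

# Hypothetical chart-review output: each replicate of a case lists the
# errors found when compared against the gold-standard transcript.
# Categories follow the study: "omission", "incorrect", "addition".
replicate_errors = [
    {"case": 1, "errors": ["omission", "omission", "incorrect"]},
    {"case": 1, "errors": ["omission", "addition"]},
    {"case": 1, "errors": ["omission"]},
]

def error_breakdown(replicates):
    """Tally errors across all replicates; return percent per category."""
    counts = Counter()
    for rep in replicates:
        counts.update(rep["errors"])
    total = sum(counts.values())
    return {category: 100 * n / total for category, n in counts.items()}

def consistently_correct(element_results):
    """Fraction of data elements reported correctly in every replicate.

    element_results maps an element name to a list of booleans,
    one per replicate (True = reported correctly in that replicate).
    """
    ok = sum(1 for flags in element_results.values() if all(flags))
    return ok / len(element_results)
```

With the toy data above, omissions dominate the breakdown, mirroring the study's finding that omissions were the most common error type; `consistently_correct` corresponds to the 52.9% "correct across all 3 replicates" statistic.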


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a8c/11074889/1cc4add5b8ee/jmir_v26i1e54419_fig1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a8c/11074889/c72764527820/jmir_v26i1e54419_fig2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a8c/11074889/34167861e93c/jmir_v26i1e54419_fig3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a8c/11074889/647bf4c09c58/jmir_v26i1e54419_fig4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a8c/11074889/23711e70273d/jmir_v26i1e54419_fig5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4a8c/11074889/f1a61207bf17/jmir_v26i1e54419_fig6.jpg

Similar Articles

[1]
Using ChatGPT-4 to Create Structured Medical Notes From Audio Recordings of Physician-Patient Encounters: Comparative Study.

J Med Internet Res. 2024-4-22

[2]
Evaluating the Usability, Technical Performance, and Accuracy of Artificial Intelligence Scribes for Primary Care: Competitive Analysis.

JMIR Hum Factors. 2025-7-23

[3]
Prescription of Controlled Substances: Benefits and Risks

2025-1

[4]
AI Scribes in Health Care: Balancing Transformative Potential With Responsible Integration.

JMIR Med Inform. 2025-8-1

[5]
Navigating the future of pediatric cardiovascular surgery: Insights and innovation powered by Chat Generative Pre-Trained Transformer (ChatGPT).

J Thorac Cardiovasc Surg. 2025-2-1

[6]
The educational effects of portfolios on undergraduate student learning: a Best Evidence Medical Education (BEME) systematic review. BEME Guide No. 11.

Med Teach. 2009-4

[7]
AI in Medical Questionnaires: Innovations, Diagnosis, and Implications.

J Med Internet Res. 2025-6-23

[8]
Sexual Harassment and Prevention Training

2025-1

[9]
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.

Clin Orthop Relat Res. 2024-12-1

[10]
Documenting Care with AI: A Comparative Analysis of Commercial Scribe Tools.

Stud Health Technol Inform. 2025-8-7

Cited By

[1]
The impact of an artificial intelligence enhancement program on healthcare providers' knowledge, attitudes, and workplace flourishing.

Front Public Health. 2025-8-7

[2]
Transforming Cancer Care: A Narrative Review on Leveraging Artificial Intelligence to Advance Immunotherapy in Underserved Communities.

J Clin Med. 2025-7-29

[3]
AI Scribes in Health Care: Balancing Transformative Potential With Responsible Integration.

JMIR Med Inform. 2025-8-1

[4]
Evaluating the Usability, Technical Performance, and Accuracy of Artificial Intelligence Scribes for Primary Care: Competitive Analysis.

JMIR Hum Factors. 2025-7-23

[5]
General practitioners' opinions of generative artificial intelligence in the UK: An online survey.

Digit Health. 2025-7-17

[6]
Impact of artificial intelligence on electronic health record-related burnouts among healthcare professionals: systematic review.

Front Public Health. 2025-7-3

[7]
The Impact of AI Scribes on Streamlining Clinical Documentation: A Systematic Review.

Healthcare (Basel). 2025-6-16

[8]
Improving Patient Communication by Simplifying AI-Generated Dental Radiology Reports With ChatGPT: Comparative Study.

J Med Internet Res. 2025-6-9

[9]
Artificial intelligence-driven natural language processing for identifying linguistic patterns in Alzheimer's disease and mild cognitive impairment: A study of lexical, syntactic, and cohesive features of speech through picture description tasks.

J Alzheimers Dis. 2025-7

[10]
Development and validation of the provider documentation summarization quality instrument for large language models.

J Am Med Inform Assoc. 2025-6-1

References

[1]
Reliability of Medical Information Provided by ChatGPT: Assessment Against Clinical Guidelines and Patient Information Quality Instrument.

J Med Internet Res. 2023-6-30

[2]
ChatGPT is not the solution to physicians' documentation burden.

Nat Med. 2023-6

[3]
What if your patient switches from Dr. Google to Dr. ChatGPT? A vignette-based survey of the trustworthiness, value, and danger of ChatGPT-generated responses to health questions.

Eur J Cardiovasc Nurs. 2024-1-12

[4]
Diagnostic Accuracy of Differential-Diagnosis Lists Generated by Generative Pretrained Transformer 3 Chatbot for Clinical Vignettes with Common Chief Complaints: A Pilot Study.

Int J Environ Res Public Health. 2023-2-15

[5]
Artificial intelligence bot ChatGPT in medical research: the potential game changer as a double-edged sword.

Knee Surg Sports Traumatol Arthrosc. 2023-4

[6]
Medical Record Closure Practices of Physicians Before and After the Use of Medical Scribes.

JAMA. 2022-10-4

[7]
Medical Documentation Burden Among US Office-Based Physicians in 2019: A National Study.

JAMA Intern Med. 2022-5-1

[8]
Comparing Scribed and Non-scribed Outpatient Progress Notes.

AMIA Annu Symp Proc. 2021

[9]
Chart Completion Time of Attending Physicians While Using Medical Scribes.

AMIA Annu Symp Proc. 2021

[10]
The future of medical scribes documenting in the electronic health record: results of an expert consensus conference.

BMC Med Inform Decis Mak. 2021-6-29
