
Generative artificial intelligence writing open notes: A mixed methods assessment of the functionality of GPT 3.5 and GPT 4.0.

Author information

Kharko Anna, McMillan Brian, Hagström Josefin, Muli Irene, Davidge Gail, Hägglund Maria, Blease Charlotte

Affiliations

Participatory eHealth and Health Data Research Group, Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden.

Medtech Science & Innovation Centre, Uppsala University Hospital, Uppsala, Sweden.

Publication information

Digit Health. 2024 Oct 29;10:20552076241291384. doi: 10.1177/20552076241291384. eCollection 2024 Jan-Dec.

Abstract

BACKGROUND

Worldwide, patients are increasingly being offered access to their full online clinical records, including the narrative reports written by clinicians (so-called "open notes"). Alongside these developments, there is growing interest in using generative artificial intelligence (AI), such as OpenAI's ChatGPT, to assist clinicians with patient-facing documentation.

OBJECTIVE

This study aimed to explore the effectiveness of OpenAI's ChatGPT 3.5 and GPT 4.0 in generating three patient-facing clinical notes from fictional general practice narrative reports.

METHODS

On 1 October 2023 and 1 November 2023, we used ChatGPT 3.5 and 4.0 to generate patient-facing notes from three validated fictional general practice notes, using a prompt in the style of a British primary care note, covering three commonly presenting conditions: (1) type 2 diabetes, (2) major depressive disorder, and (3) a differential diagnosis for suspected bowel cancer. Outputs were analyzed for reading ease, sentiment, empathy, and medical fidelity.
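The abstract does not reproduce the exact prompt wording or interface; the study used ChatGPT 3.5 and 4.0 directly on the dates above. As a rough, illustrative sketch of the transformation step only, the code below assumes the OpenAI Chat Completions API, the model identifiers "gpt-3.5-turbo" and "gpt-4" as stand-ins for the two versions compared, and a hypothetical instruction summarizing the task described here.

# Illustrative sketch only; not the study's actual prompt or pipeline.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical instruction: the study's prompt wording is not given in the abstract.
INSTRUCTION = (
    "Rewrite the following British primary care consultation note as an open note "
    "addressed directly to the patient, in plain language, keeping all clinical content."
)

def rewrite_note(note_text: str, model: str = "gpt-4") -> str:
    """Ask the model to turn a clinician-facing note into a patient-facing version."""
    response = client.chat.completions.create(
        model=model,  # "gpt-3.5-turbo" or "gpt-4", mirroring the two versions compared
        messages=[
            {"role": "system", "content": INSTRUCTION},
            {"role": "user", "content": note_text},
        ],
    )
    return response.choices[0].message.content

# Example usage with a stand-in fragment of a fictional note:
# patient_note = rewrite_note("O/E: BMI 32, HbA1c 58 mmol/mol. Imp: T2DM. Plan: start metformin 500 mg OD.")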

RESULTS

ChatGPT 3.5 and 4.0 wrote longer notes than the originals and embedded more second-person pronouns, with ChatGPT 3.5 scoring higher on both. ChatGPT expanded abbreviations, but readability metrics showed that the notes required a higher reading proficiency, with ChatGPT 3.5 demanding the most advanced level. Across all notes, ChatGPT showed stronger signatures of empathy across cognitive, compassion/sympathy, and prosocial cues. Medical fidelity ratings varied across the three cases, with ChatGPT 4.0 rated superior.
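The abstract does not name the readability, pronoun, or sentiment tools used. As a minimal sketch of the kinds of measures reported (note length, second-person pronouns, reading level, sentiment), the example below assumes the widely used textstat library and NLTK's VADER sentiment analyzer as stand-ins; note_metrics is a hypothetical helper, not the authors' instrument.

# Illustrative sketch only; the study's actual measurement tools are not specified here.
import re

import textstat
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

# Second-person pronouns as a simple proxy for patient-directed language.
SECOND_PERSON = re.compile(r"\b(you|your|yours|yourself)\b", re.IGNORECASE)

def note_metrics(text: str) -> dict:
    """Word count, second-person pronoun count, reading level, and sentiment for one note."""
    return {
        "words": len(text.split()),
        "second_person_pronouns": len(SECOND_PERSON.findall(text)),
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "sentiment_compound": SentimentIntensityAnalyzer().polarity_scores(text)["compound"],
    }

# Comparing an original note with its ChatGPT rewrite might then look like:
# print(note_metrics(original_note))
# print(note_metrics(rewrite_note(original_note)))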

CONCLUSIONS

While ChatGPT improved sentiment and empathy metrics in the transformed notes, compared to the originals the rewritten notes also required higher reading proficiency and omitted details, compromising medical fidelity.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5863/11528788/a52f174f49a3/10.1177_20552076241291384-fig1.jpg
