Baxter Sally L, Longhurst Christopher A, Millen Marlene, Sitapati Amy M, Tai-Seale Ming
Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA 92093, United States.
Department of Biomedical Informatics, University of California San Diego Health, La Jolla, CA 92093, United States.
JAMIA Open. 2024 Apr 10;7(2):ooae028. doi: 10.1093/jamiaopen/ooae028. eCollection 2024 Jul.
Electronic health record (EHR)-based patient messages can contribute to burnout. Messages with a negative tone are particularly challenging to address. In this perspective, we describe our initial evaluation of large language model (LLM)-generated responses to negative EHR patient messages and contend that using LLMs to generate initial drafts may be feasible, although refinement will be needed.
A retrospective sample (n = 50) of negative patient messages was extracted from a health system EHR, de-identified, and input into an LLM (ChatGPT). Qualitative analyses were conducted to compare LLM responses with actual care team responses.
Some LLM-generated draft responses differed from human responses in relational connection, informational content, and recommendations for next steps. Occasionally, LLM draft responses risked escalating emotionally charged conversations.
Further work is needed to optimize the use of LLMs for responding to negative patient messages in the EHR.