Centre for Depression, Anxiety Disorders and Psychotherapy, Psychiatric University Hospital Zurich (PUK), Zurich, Switzerland; Faculty of Medicine, University of Zurich (UZH), Zurich, Switzerland.
Center for Acute Psychiatry and Psychotherapy, Psychiatric University Hospital Zurich (PUK), Zurich, Switzerland; Faculty of Medicine, University of Zurich (UZH), Zurich, Switzerland.
Int J Med Inform. 2024 Dec;192:105654. doi: 10.1016/j.ijmedinf.2024.105654. Epub 2024 Oct 14.
To evaluate whether psychiatric discharge summaries (DS) generated with ChatGPT-4 from electronic health records (EHR) can match the quality of DS written by psychiatric residents.
At a psychiatric primary care hospital, we compared 20 inpatient DS written by residents with DS generated by ChatGPT-4 from pseudonymized resident notes in the patients' EHRs using a standardized prompt. Eight blinded psychiatry specialists rated both versions on a custom Likert scale from 1 to 5 across 15 quality subcategories. The primary outcome was the overall rating difference between the two groups. Secondary outcomes were the rating differences at the level of individual questions, cases, and raters.
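For illustration, a minimal sketch of how such a paired per-case rating comparison could be computed. The abstract does not name the statistical test used, so the Wilcoxon signed-rank test and the rating lists below are assumptions for illustration, not study data or the authors' actual analysis.

```python
# Minimal sketch of a paired per-case rating comparison, assuming a
# Wilcoxon signed-rank test (the abstract does not name the test).
# The rating lists are hypothetical placeholders, not study data.
from scipy.stats import wilcoxon

# Hypothetical per-case mean ratings (1-5 Likert scale, 20 cases each).
human = [3.9, 3.6, 4.1, 3.8, 3.5, 4.0, 3.7, 3.9, 3.6, 4.2,
         3.8, 3.7, 3.5, 4.0, 3.9, 3.6, 3.8, 4.1, 3.7, 3.8]
ai    = [3.0, 3.4, 2.8, 3.2, 3.5, 2.9, 3.1, 3.3, 3.0, 2.7,
         3.2, 3.1, 3.4, 2.9, 3.0, 3.3, 3.1, 2.8, 3.2, 3.1]

# Primary outcome: overall rating difference between the two groups.
print(f"mean human {sum(human)/len(human):.2f}, mean AI {sum(ai)/len(ai):.2f}")

# Paired, non-parametric comparison across cases.
stat, p = wilcoxon(human, ai)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")
```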
Human-written DS were rated significantly higher than AI-DS (mean ratings: human 3.78, AI 3.12, p < 0.05). They significantly surpassed AI-DS in 12/15 questions and 16/20 cases and were significantly favored by 7/8 raters. For "low expected correction effort", human DS were rated 67% favorable, 19% neutral, and 14% unfavorable, whereas AI-DS were rated 22% favorable, 33% neutral, and 45% unfavorable. Hallucinations were present in 40% of AI-DS, of which 37.5% were deemed highly clinically relevant. Minor content mistakes were found in 30% of AI-DS and 10% of human DS. Raters correctly identified AI-DS with 81% sensitivity and 75% specificity.
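The rater-identification figures reduce to a standard sensitivity/specificity calculation, where "positive" means the DS is AI-written. A minimal sketch follows; the helper function and the toy guess vectors are illustrative assumptions, not study data.

```python
# Sketch of the sensitivity/specificity computation for rater
# identification of AI-DS. "Positive" = the DS is AI-written.
def sensitivity_specificity(is_ai, guessed_ai):
    tp = sum(a and g for a, g in zip(is_ai, guessed_ai))          # AI correctly flagged
    fn = sum(a and not g for a, g in zip(is_ai, guessed_ai))      # AI missed
    tn = sum(not a and not g for a, g in zip(is_ai, guessed_ai))  # human correctly passed
    fp = sum(not a and g for a, g in zip(is_ai, guessed_ai))      # human wrongly flagged
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 10 DS, half AI-written, with hypothetical rater guesses.
is_ai      = [True] * 5 + [False] * 5
guessed_ai = [True, True, True, True, False, False, False, True, False, False]
sens, spec = sensitivity_specificity(is_ai, guessed_ai)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # 80%, 80%
```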
Overall, AI-DS did not match the quality of resident-written DS, but they performed comparably in 20% of cases and were rated favorably for "low expected correction effort" in 22% of cases. AI-DS were weakest in content specificity, distilling key case information, and coherence, but performed adequately in conciseness, adherence to formalities, relevance of included content, and form.
LLM-written DS show promise as templates for physicians to finalize and could save time in the future.