Patel Neeket R, Lacher Corey R, Huang Alan Y, Kolomeyer Anton, Bavinger J Clay, Carroll Robert M, Kim Benjamin J, Tsui Jonathan C
Institute of Ophthalmology and Visual Science, Rutgers New Jersey Medical School, Newark, NJ, 07103, USA.
Scheie Eye Institute, Department of Ophthalmology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
Clin Ophthalmol. 2025 Jun 3;19:1763-1769. doi: 10.2147/OPTH.S513633. eCollection 2025.
To analyze the application of large language models (LLMs) in listening to and generating medical documentation for vitreoretinal clinic encounters.
Two publicly available large language models, Google Gemini 1.0 Pro and ChatGPT 3.5.
Patient-physician dialogues simulating vitreoretinal clinic scenarios were scripted to reflect real-world encounters and recorded for standardization. The audio files were given to two artificial intelligence engines to transcribe the dialogue and produce medical documentation of the encounters. Similarity between each scripted dialogue and its LLM transcription was assessed using an online comparability tool. A panel of practicing retina specialists evaluated each generated medical note.
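The specific online comparability tool is not named in the abstract; a minimal sketch of the same kind of transcript-to-script similarity scoring, using Python's standard-library `difflib.SequenceMatcher` as a stand-in, might look like this (the example texts are hypothetical):

```python
# Hypothetical sketch: difflib stands in for the unnamed online
# comparability tool used in the study to score transcript fidelity.
from difflib import SequenceMatcher

def similarity_percent(script: str, transcript: str) -> float:
    """Return the percent similarity between two texts (0-100)."""
    return 100.0 * SequenceMatcher(None, script, transcript).ratio()

script = "The patient reports new floaters in the right eye for two days."
transcript = "The patient reports new floaters in the right eye for 2 days."
print(f"{similarity_percent(script, transcript):.1f}%")
```

A ratio-based measure like this rewards long matching runs of text, so transcription paraphrases and omissions both lower the score, broadly analogous to the percent-similarity figures reported in the results.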
The number of discrepancies and overall similarity of LLM text compared to the scripted patient-physician dialogues, and scoring of each medical note by five retina specialists on the Physician Documentation Quality Instrument-9 (PDQI-9).
On average, the documentation produced by AI engines scored 81.5% of total possible points in documentation quality. Similarity between pre-formed dialogue scripts and transcribed encounters was higher for ChatGPT (96.5%) compared to Gemini (90.6%, p<0.01). The mean total PDQI-9 score among all encounters from ChatGPT 3.5 (196.2/225, 87.2%) was significantly greater than Gemini 1.0 Pro (170.4/225, 75.7%, p=0.002).
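The 225-point denominator is consistent with five raters each scoring the nine PDQI-9 items on a 1-5 scale (9 × 5 × 5 = 225); assuming that scoring scheme, the reported percentages can be checked directly:

```python
# Assumption: 9 PDQI-9 items, scored 1-5, by 5 raters -> max total 225.
items, scale_max, raters = 9, 5, 5
max_total = items * scale_max * raters  # 225

for name, total in [("ChatGPT 3.5", 196.2), ("Gemini 1.0 Pro", 170.4)]:
    pct = 100.0 * total / max_total
    print(f"{name}: {total}/{max_total} = {pct:.1f}%")
# ChatGPT 3.5: 196.2/225 = 87.2%
# Gemini 1.0 Pro: 170.4/225 = 75.7%
```

Both computed percentages match the figures reported above.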
We report the aptitude of two popular LLMs (ChatGPT 3.5 and Google Gemini 1.0 Pro) in generating medical notes from audio recordings of scripted vitreoretinal clinical encounters, evaluated with a validated medical documentation tool. Artificial intelligence can produce quality vitreoretinal clinic encounter notes after listening to patient-physician dialogues despite case complexity and missing encounter variables. The performance of these engines was satisfactory but sometimes included fabricated information. These findings demonstrate the potential utility of LLMs in reducing the documentation burden on physicians and streamlining patient care.