Cohen Samuel A, Yadlapalli Nikhita, Tijerina Jonathan D, Alabiad Chrisfouad R, Chang Jessica R, Kinde Benyam, Mahoney Nicholas R, Roelofs Kelsey A, Woodward Julie A, Kossler Andrea L
Department of Ophthalmology, Stein Eye Institute at University of California Los Angeles David Geffen School of Medicine, Los Angeles, CA, USA.
Department of Ophthalmology, FIU Herbert Wertheim College of Medicine, Miami, FL, USA.
Clin Ophthalmol. 2024 Sep 21;18:2647-2655. doi: 10.2147/OPTH.S480222. eCollection 2024.
To compare the accuracy and readability of responses to oculoplastics patient questions provided by Google and ChatGPT. Additionally, to assess the ability of ChatGPT to create customized patient education materials.
We performed a Google search to identify the 3 most frequently asked patient questions (FAQs) for each of 10 oculoplastics conditions. Each FAQ was entered into both the Google search engine and ChatGPT, and the responses were recorded. Responses were graded for readability using five validated readability indices and for accuracy by six oculoplastic surgeons. ChatGPT was then instructed to create patient education materials at various reading levels for 8 oculoplastics procedures, and the accuracy and readability of these procedural explanations were assessed.
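The abstract does not name the five validated readability indices used. As a hedged illustration of this scoring step, the sketch below grades a sample response with five commonly used grade-level indices via the Python `textstat` package and averages them into a single grade level of the kind reported in the results; the choice of indices is an assumption.

```python
# Sketch: scoring a response with five common readability indices using the
# textstat package (pip install textstat). The five indices below are an
# assumption; the abstract does not specify which validated indices were used.
import textstat

response = (
    "Blepharoplasty is a surgical procedure that removes excess skin "
    "and fat from the eyelids to improve vision or appearance."
)

indices = {
    "Flesch-Kincaid Grade": textstat.flesch_kincaid_grade(response),
    "Gunning Fog": textstat.gunning_fog(response),
    "SMOG": textstat.smog_index(response),
    "Coleman-Liau": textstat.coleman_liau_index(response),
    "Automated Readability": textstat.automated_readability_index(response),
}

# Average the five grade-level estimates, mirroring how a single
# "average grade level" (e.g., 15.6 vs 10.0) could be reported.
average_grade = sum(indices.values()) / len(indices)
print(indices)
print(f"Average grade level: {average_grade:.1f}")
```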
ChatGPT responses to patient FAQs were written at a significantly higher average grade level than Google responses (grade 15.6 vs 10.0, p < 0.001). ChatGPT responses were significantly more accurate than Google responses (93% vs 78% accuracy, p < 0.001) and were preferred by expert panelists (79%). ChatGPT accurately explained oculoplastics procedures at an above-average reading level. When instructed to rewrite patient education materials at a lower reading level, ChatGPT reduced the average grade level by approximately four levels (15.7 vs 11.7, p < 0.001) without sacrificing accuracy.
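The rewriting step can, in principle, be reproduced with a single prompt. The sketch below is a minimal illustration using the OpenAI Python SDK; the model name, prompt wording, and target reading level are assumptions, as the abstract does not report them.

```python
# Sketch: asking a chat model to rewrite patient education material at a
# lower reading level, using the OpenAI Python SDK (pip install openai).
# Model choice, prompt wording, and target grade level are assumptions,
# not the study's reported protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

original_text = "..."  # a procedural explanation written at a high reading level

response = client.chat.completions.create(
    model="gpt-4",  # assumed model
    messages=[
        {
            "role": "user",
            "content": (
                "Rewrite the following patient education material at a "
                "lower reading level without changing its medical content:"
                f"\n\n{original_text}"
            ),
        }
    ],
)
print(response.choices[0].message.content)
```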
ChatGPT has the potential to provide patients with accurate information about their oculoplastics conditions. Oculoplastic surgeons may also use ChatGPT as an accurate tool to generate customizable education materials for patients with varying levels of health literacy. A better understanding of oculoplastics conditions and procedures among patients can support informed eye care decisions.