Evaluating the Accuracy, Completeness, and Readability of Chatbot Responses to Refractive Surgery-Related Patient Questions: A Comparative Analysis of ChatGPT and Google Gemini.

Authors

Arslan Selva, Usta Küçükbezirci Güldeniz

Affiliations

Department of Ophthalmology, University of Health Sciences, Sadi Konuk Training and Research Hospital, Istanbul, TUR.

Department of Ophthalmology, University of Health Sciences, Istanbul Training and Research Hospital, Istanbul, TUR.

Publication Information

Cureus. 2025 Jul 29;17(7):e88980. doi: 10.7759/cureus.88980. eCollection 2025 Jul.


DOI: 10.7759/cureus.88980
PMID: 40895905
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12398298/
Abstract

Purpose: This study evaluates the performance of ChatGPT and Google Gemini in addressing refractive surgery-related patient questions by analysing the accuracy, completeness, and readability of their responses.

Methods: A total of 40 refractive surgery-related questions were compiled and categorized into three levels of difficulty: easy, medium, and hard. Responses from ChatGPT and Google Gemini were blinded and evaluated by two experienced ophthalmologists using standardized criteria. Accuracy was scored on a six-point Likert scale, completeness on a three-point scale, and readability using the Flesch-Kincaid Grade Level, Gunning Fog Index, Simple Measure of Gobbledygook (SMOG) Index, and word count. Intra- and inter-rater reliability were assessed using intra-class correlation coefficients (ICC).

Results: Both chatbots demonstrated high intra-rater (ICC > 0.75) and inter-rater reliability. Accuracy scores were similar for most questions; however, statistically significant differences were observed for harder questions, where Gemini showed slightly reduced performance compared to ChatGPT. Readability metrics revealed no significant differences between the two tools, although ChatGPT responses tended to be more detailed, while Gemini generated more concise answers. Harder questions resulted in longer and more complex responses, as indicated by higher Gunning Fog and SMOG Index scores.

Conclusions: ChatGPT and Google Gemini exhibit strong potential in patient education, with complementary strengths in accuracy, readability, and response detail. The influence of question complexity on chatbot performance highlights the need for ongoing optimization to enhance both clarity and accessibility. These findings underscore the value of integrating artificial intelligence (AI) tools into healthcare to support patient education and engagement.
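For readers unfamiliar with the readability indices named in the Methods, the sketch below illustrates how the Flesch-Kincaid Grade Level, Gunning Fog Index, and SMOG Index are computed from a chatbot response. It uses the standard published formulas but a crude vowel-group heuristic for syllable counting; it is an illustrative approximation, not the authors' actual scoring pipeline.

```python
import re
import math


def count_syllables(word: str) -> int:
    """Rough syllable count: number of vowel groups (a crude heuristic, not a linguistic parser)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def readability_metrics(text: str) -> dict:
    """Compute word count, Flesch-Kincaid Grade Level, Gunning Fog Index, and SMOG Index."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_sent = max(1, len(sentences))
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    # "Complex" (polysyllabic) words have three or more syllables.
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    fkgl = 0.39 * (n_words / n_sent) + 11.8 * (syllables / n_words) - 15.59
    fog = 0.4 * ((n_words / n_sent) + 100 * (complex_words / n_words))
    smog = 1.0430 * math.sqrt(complex_words * (30 / n_sent)) + 3.1291

    return {
        "word_count": len(words),
        "flesch_kincaid_grade": round(fkgl, 2),
        "gunning_fog": round(fog, 2),
        "smog": round(smog, 2),
    }


if __name__ == "__main__":
    # Hypothetical chatbot answer used purely to demonstrate the metrics.
    sample = (
        "LASIK reshapes the cornea with an excimer laser to correct refractive error. "
        "Most patients notice clearer vision within a day, although dryness and glare can persist for weeks."
    )
    print(readability_metrics(sample))
```

Higher values on all three indices indicate text that demands more years of schooling to read comfortably, which is why the study reports rising Gunning Fog and SMOG scores for the harder questions.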


Similar Articles

[1]
Evaluating the Accuracy, Completeness, and Readability of Chatbot Responses to Refractive Surgery-Related Patient Questions: A Comparative Analysis of ChatGPT and Google Gemini.

Cureus. 2025-7-29

[2]
Artificial Intelligence in Peripheral Artery Disease Education: A Battle Between ChatGPT and Google Gemini.

Cureus. 2025-6-1

[3]
Performance of Advanced Artificial Intelligence Models in Pulp Therapy for Immature Permanent Teeth: A Comparison of ChatGPT-4 Omni, DeepSeek, and Gemini Advanced in Accuracy, Completeness, Response Time, and Readability.

J Endod. 2025-8-22

[4]
Readability, reliability and quality of responses generated by ChatGPT, gemini, and perplexity for the most frequently asked questions about pain.

Medicine (Baltimore). 2025-3-14

[5]
Comparison of Responses from ChatGPT-4, Google Gemini, and Google Search to Common Patient Questions About Ankle Sprains: A Readability Analysis.

J Am Acad Orthop Surg. 2025-7-3

[6]
How Accurate Is AI? A Critical Evaluation of Commonly Used Large Language Models in Responding to Patient Concerns About Incidental Kidney Tumors.

J Clin Med. 2025-8-12

[7]
Readability, Reliability, and Quality Analysis of Internet-Based Patient Education Materials and Large Language Models on Meniere's Disease.

J Otolaryngol Head Neck Surg. 2025

[8]
Evaluation of ChatGPT-4 as an Online Outpatient Assistant in Puerperal Mastitis Management: Content Analysis of an Observational Study.

JMIR Med Inform. 2025-7-24

[9]
Evaluating ChatGPT's Utility in Biologic Therapy for Systemic Lupus Erythematosus: Comparative Study of ChatGPT and Google Web Search.

JMIR Form Res. 2025-8-28

[10]
Evaluating DeepResearch and DeepThink in anterior cruciate ligament surgery patient education: ChatGPT-4o excels in comprehensiveness, DeepSeek R1 leads in clarity and readability of orthopaedic information.

Knee Surg Sports Traumatol Arthrosc. 2025-6-1

References Cited in This Article

[1]
An Observational Study to Evaluate Readability and Reliability of AI-Generated Brochures for Emergency Medical Conditions.

Cureus. 2024-8-31

[2]
Recent Advances in Refractive Surgery: An Overview.

Clin Ophthalmol. 2024-9-2

[3]
Roles, Users, Benefits, and Limitations of Chatbots in Health Care: Rapid Review.

J Med Internet Res. 2024-7-23

[4]
Use of artificial intelligence chatbots in clinical management of immune-related adverse events.

J Immunother Cancer. 2024-5-30

[5]
Physician and Artificial Intelligence Chatbot Responses to Cancer Questions From Social Media.

JAMA Oncol. 2024-7-1

[6]
A Comparative Analysis of AI Models in Complex Medical Decision-Making Scenarios: Evaluating ChatGPT, Claude AI, Bard, and Perplexity.

Cureus. 2024-1-18

[7]
Performance of ChatGPT on the Chinese Postgraduate Examination for Clinical Medicine: Survey Study.

JMIR Med Educ. 2024-2-9

[8]
Exploring the Possible Use of AI Chatbots in Public Health Education: Feasibility Study.

JMIR Med Educ. 2023-11-1

[9]
Evaluating the Artificial Intelligence Performance Growth in Ophthalmic Knowledge.

Cureus. 2023-9-21

[10]
Accuracy and Reliability of Chatbot Responses to Physician Questions.

JAMA Netw Open. 2023-10-2
