

ChatGPT: is it good for our glaucoma patients?

Authors

Wu Gloria, Lee David A, Zhao Weichen, Wong Adrial, Sidhu Sahej

Affiliations

Department of Ophthalmology, University of California San Francisco, San Francisco, CA, United States.

Department of Ophthalmology, McGovern Medical School, University of Texas Health Science Center, Houston, TX, United States.

Publication

Front Ophthalmol (Lausanne). 2023 Nov 16;3:1260415. doi: 10.3389/fopht.2023.1260415. eCollection 2023.

DOI: 10.3389/fopht.2023.1260415
PMID: 38983063
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11182305/
Abstract

PURPOSE

Our study investigates ChatGPT and its ability to communicate with glaucoma patients.

METHODS

We inputted eight glaucoma-related questions/topics found on the American Academy of Ophthalmology (AAO)'s website into ChatGPT. We used the Flesch-Kincaid test, Gunning Fog Index, SMOG Index, and Dale-Chall readability formula to evaluate the comprehensibility of its responses for patients. ChatGPT's answers were compared with those found on the AAO's website.
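As an illustration of the readability measures named above, here is a minimal sketch of the Flesch-Kincaid Grade Level formula. The syllable counter is a crude vowel-group heuristic (published tools use dictionaries or better heuristics), and the sample sentence is invented, not taken from the study.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels;
    # every word carries at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level =
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Invented sample text, not from the study's data.
sample = "Glaucoma damages the optic nerve. Early treatment can preserve vision."
print(round(flesch_kincaid_grade(sample), 1))
```

The Gunning Fog, SMOG, and Dale-Chall scores are computed analogously from sentence length and word difficulty, which is why they tend to move together on the same text.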

RESULTS

ChatGPT's responses required reading comprehension at a higher grade level (mean grade 12.5 ± 1.6) than the text on the AAO's website (mean grade 9.4 ± 3.5) (P = 0.0384). Across the eight responses, the key ophthalmic terms appeared 34 out of 86 times in the ChatGPT responses vs. 86 out of 86 times in the text on the AAO's website. The term "eye doctor" appeared once in the ChatGPT text, but the formal term "ophthalmologist" did not appear at all; "ophthalmologist" appeared 26 times on the AAO's website. The word counts of the answers produced by ChatGPT and those on the AAO's website were similar (P = 0.571), with phrases of homogeneous length.
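The key-term tally described above can be sketched as a simple case-insensitive whole-phrase count. The term list and sample response below are hypothetical illustrations, not the study's actual terms or data.

```python
import re
from collections import Counter

# Hypothetical key terms for illustration; the study used its own list
# of 86 key ophthalmic term occurrences drawn from the AAO website.
KEY_TERMS = ["ophthalmologist", "eye doctor", "intraocular pressure"]

def term_counts(text: str, terms: list[str]) -> Counter:
    """Count case-insensitive, whole-phrase occurrences of each key term."""
    low = text.lower()
    counts = Counter()
    for term in terms:
        # \b anchors keep e.g. "ophthalmologists" from matching "ophthalmologist" twice
        counts[term] = len(re.findall(r"\b" + re.escape(term) + r"\b", low))
    return counts

# Invented sample response, not taken from the study.
response = ("See an eye doctor if your intraocular pressure is high. "
            "An ophthalmologist can measure intraocular pressure.")
print(term_counts(response, KEY_TERMS))
```

A tally like this makes comparisons such as "34 of 86 occurrences in ChatGPT text vs. 86 of 86 on the AAO site" reproducible across response sets.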

CONCLUSION

ChatGPT trains on the texts, phrases, and algorithms inputted by software engineers. As ophthalmologists, we should consider encoding the phrase "see an ophthalmologist" into our websites and journals. Our medical assistants should sit with patients during their appointments to ensure that the text is accurate and that patients fully comprehend its meaning. ChatGPT is effective for providing general information such as definitions or potential treatment options for glaucoma. However, ChatGPT tends toward repetitive answers, and their elevated readability scores may make them too difficult for patients to read.
