Similar Articles

1. ChatGPT and Google Assistant as a Source of Patient Education for Patients With Amblyopia: Content Analysis.
J Med Internet Res. 2024 Aug 15;26:e52401. doi: 10.2196/52401.
2. Prescription of Controlled Substances: Benefits and Risks.
3. Is Information About Musculoskeletal Malignancies From Large Language Models or Web Resources at a Suitable Reading Level for Patients?
Clin Orthop Relat Res. 2025 Feb 1;483(2):306-315. doi: 10.1097/CORR.0000000000003263. Epub 2024 Sep 25.
4. Using Artificial Intelligence ChatGPT to Access Medical Information About Chemical Eye Injuries: Comparative Study.
JMIR Form Res. 2025 Aug 13;9:e73642. doi: 10.2196/73642.
5. Sexual Harassment and Prevention Training.
6. Can Artificial Intelligence Improve the Readability of Patient Education Materials?
Clin Orthop Relat Res. 2023 Nov 1;481(11):2260-2267. doi: 10.1097/CORR.0000000000002668. Epub 2023 Apr 28.
7. Artificial Intelligence in Peripheral Artery Disease Education: A Battle Between ChatGPT and Google Gemini.
Cureus. 2025 Jun 1;17(6):e85174. doi: 10.7759/cureus.85174. eCollection 2025 Jun.
8. Consequences, costs and cost-effectiveness of workforce configurations in English acute hospitals.
Health Soc Care Deliv Res. 2025 Jul;13(25):1-107. doi: 10.3310/ZBAR9152.
9. Enhancing the Readability of Online Patient Education Materials Using Large Language Models: Cross-Sectional Study.
J Med Internet Res. 2025 Jun 4;27:e69955. doi: 10.2196/69955.
10. Using Artificial Intelligence ChatGPT to Access Medical Information about Chemical Eye Injuries: A Comparative Study.
JMIR Form Res. 2025 Jun 30. doi: 10.2196/73642.

Cited By

1. Efficacy of Dichoptic Treatment vs Eye Patching in Pediatric Patients with Amblyopia: A Systematic Review and Meta-Analysis of Randomized Controlled Trials.
Clin Ophthalmol. 2025 Jun 26;19:1999-2009. doi: 10.2147/OPTH.S513329. eCollection 2025.
2. Chinese generative AI models (DeepSeek and Qwen) rival ChatGPT-4 in ophthalmology queries with excellent performance in Arabic and English.
Narra J. 2025 Apr;5(1):e2371. doi: 10.52225/narra.v5i1.2371. Epub 2025 Apr 8.

References

1. Harnessing brain plasticity to improve binocular vision in amblyopia: An evidence-based update.
Eur J Ophthalmol. 2024 Jul;34(4):901-912. doi: 10.1177/11206721231187426. Epub 2023 Jul 10.
2. Evaluation of Reading Level of Result Letters Sent to Patients from an Academic Primary Care Practice.
Health Serv Res Manag Epidemiol. 2023 Apr 25;10:23333928231172142. doi: 10.1177/23333928231172142. eCollection 2023 Jan-Dec.
3. Medicine in the Era of Artificial Intelligence: Hey Chatbot, Write Me an H&P.
JAMA Intern Med. 2023 Jun 1;183(6):507-508. doi: 10.1001/jamainternmed.2023.1832.
4. How Chatbots and Large Language Model Artificial Intelligence Systems Will Reshape Modern Medicine: Fountain of Creativity or Pandora's Box?
JAMA Intern Med. 2023 Jun 1;183(6):596-597. doi: 10.1001/jamainternmed.2023.1835.
5. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum.
JAMA Intern Med. 2023 Jun 1;183(6):589-596. doi: 10.1001/jamainternmed.2023.1838.
6. Potential Use of Artificial Intelligence in Infectious Disease: Take ChatGPT as an Example.
Ann Biomed Eng. 2023 Jun;51(6):1130-1135. doi: 10.1007/s10439-023-03203-3. Epub 2023 Apr 19.
7. The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers.
JMIR Med Educ. 2023 Mar 6;9:e46885. doi: 10.2196/46885.
8. Diagnostic Accuracy of Differential-Diagnosis Lists Generated by Generative Pretrained Transformer 3 Chatbot for Clinical Vignettes with Common Chief Complaints: A Pilot Study.
Int J Environ Res Public Health. 2023 Feb 15;20(4):3378. doi: 10.3390/ijerph20043378.
9. The Global Prevalence of Amblyopia in Children: A Systematic Review and Meta-Analysis.
Front Pediatr. 2022 May 4;10:819998. doi: 10.3389/fped.2022.819998. eCollection 2022.
10. Towards a more patient-centered clinical trial process: A systematic review of interventions incorporating health literacy best practices.
Contemp Clin Trials. 2022 May;116:106733. doi: 10.1016/j.cct.2022.106733. Epub 2022 Mar 15.


ChatGPT and Google Assistant as a Source of Patient Education for Patients With Amblyopia: Content Analysis.

Author Affiliations

University of California, San Francisco School of Medicine, San Francisco, CA, United States.

McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, United States.

Publication Information

J Med Internet Res. 2024 Aug 15;26:e52401. doi: 10.2196/52401.

DOI: 10.2196/52401
PMID: 39146013
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11362700/
Abstract

BACKGROUND

We queried ChatGPT (OpenAI) and Google Assistant about amblyopia and compared their answers with the keywords found on the American Association for Pediatric Ophthalmology and Strabismus (AAPOS) website, specifically the section on amblyopia. Out of the 26 keywords chosen from the website, ChatGPT included 11 (42%) in its responses, while Google included 8 (31%).

OBJECTIVE

Our study investigated the adherence of ChatGPT-3.5 and Google Assistant to the guidelines of the AAPOS for patient education on amblyopia.

METHODS

ChatGPT-3.5 was used. The four questions taken from the AAPOS website, specifically its glossary section for amblyopia, are as follows: (1) What is amblyopia? (2) What causes amblyopia? (3) How is amblyopia treated? (4) What happens if amblyopia is untreated? Approved and selected by ophthalmologists (GW and DL), the keywords from AAPOS were words or phrases that were deemed significant for the education of patients with amblyopia. The Flesch-Kincaid Grade Level formula, approved by the US Department of Education, was used to evaluate the reading comprehension level required for the responses from ChatGPT, Google Assistant, and AAPOS.
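The Flesch-Kincaid Grade Level scoring used in the methods above can be sketched in Python. The formula itself is the standard one (0.39 × words per sentence + 11.8 × syllables per word − 15.59); the syllable counter below is a naive vowel-group heuristic for illustration, not the dictionary-based counting a production readability tool would use.

```python
import re

def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
    # Standard Flesch-Kincaid Grade Level formula:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)

def count_syllables(word):
    # Naive heuristic: each run of consecutive vowels counts as one syllable.
    # Real readability tools use pronunciation dictionaries instead.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def grade_level(text):
    # Split into sentences on terminal punctuation, pull out words,
    # then apply the formula to the aggregate counts.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return flesch_kincaid_grade(len(words), len(sentences), syllables)
```

For example, a 100-word passage with 5 sentences and 150 syllables scores 0.39 × 20 + 11.8 × 1.5 − 15.59 = 9.91, roughly a 10th-grade reading level, consistent with the 11-to-13 grade-level range reported in the results below.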

RESULTS

In their responses, ChatGPT did not mention the term "ophthalmologist," whereas Google Assistant and AAPOS mentioned the term once and twice, respectively. ChatGPT did, however, use the term "eye doctors" once. According to the Flesch-Kincaid test, the average reading level of AAPOS was 11.4 (SD 2.1; the lowest level), while that of Google was 13.1 (SD 4.8; the highest required reading level), which also showed the greatest variation in grade level across its responses. ChatGPT's answers scored at a 12.4 (SD 1.1) grade level on average. All three sources were similar in reading difficulty. For the keywords, across the 4 responses, ChatGPT used 42% (11/26) of the keywords, whereas Google Assistant used 31% (8/26).

CONCLUSIONS

ChatGPT trains on texts and phrases and generates new sentences, while Google Assistant automatically copies website links. As ophthalmologists, we should consider including "see an ophthalmologist" on our websites and journals. While ChatGPT is here to stay, we, as physicians, need to monitor its answers.
