Suppr 超能文献



Chatbots talk Strabismus: Can AI become the new patient Educator?

Affiliations

Ophthalmology Department, Gaziantep Islam Science and Technology University, Gaziantep, Turkey.

Publication Info

Int J Med Inform. 2024 Nov;191:105592. doi: 10.1016/j.ijmedinf.2024.105592. Epub 2024 Aug 16.

DOI: 10.1016/j.ijmedinf.2024.105592
PMID: 39159506
Abstract

BACKGROUND

Strabismus is a common eye condition affecting both children and adults. Effective patient education is crucial for informed decision-making, but traditional methods often lack accessibility and engagement. Chatbots powered by AI have emerged as a promising solution.

AIM

This study aims to evaluate and compare the performance of three chatbots (ChatGPT, Bard, and Copilot) and a reliable website (AAPOS) in answering real patient questions about strabismus.

METHOD

Three chatbots (ChatGPT, Bard, and Copilot) were compared to a reliable website (AAPOS) using real patient questions. Metrics included accuracy (SOLO taxonomy), understandability/actionability (PEMAT), and readability (Flesch-Kincaid). We also performed a sentiment analysis to capture the emotional tone and impact of the responses.
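The Flesch-Kincaid readability metric used above is a simple formula over word, sentence, and syllable counts: Reading Ease = 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word), where higher scores mean easier text. As a rough illustration only (not the study's actual tooling, which is unspecified), a minimal Python sketch with a naive vowel-group syllable heuristic:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: one syllable per run of consecutive vowels
    # (y treated as a vowel); every word counts as at least one.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Split sentences on terminal punctuation and extract word tokens.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch Reading Ease: higher = easier to read.
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

On the usual 0-100 interpretation scale, the AAPOS mean of 55.8 reported below falls in the "fairly difficult" band typical of general web prose; production tools use dictionary-based syllable counts, so this heuristic's scores are only approximate.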

RESULTS

The AAPOS achieved the highest mean SOLO score (4.14 ± 0.47), followed by Bard, Copilot, and ChatGPT. Bard scored highest on both the PEMAT-U (74.8 ± 13.3) and PEMAT-A (66.2 ± 13.6) measures. Flesch-Kincaid Ease Scores showed the AAPOS to be the easiest to read (mean score: 55.8 ± 14.11), closely followed by Copilot; ChatGPT and Bard had lower readability scores. The sentiment analysis revealed notable differences in emotional tone among the sources.

CONCLUSION

Chatbots, particularly Bard and Copilot, show promise in patient education for strabismus with strengths in understandability and actionability. However, the AAPOS website outperformed in accuracy and readability.


Similar Articles

1. Chatbots talk Strabismus: Can AI become the new patient Educator?
   Int J Med Inform. 2024 Nov;191:105592. doi: 10.1016/j.ijmedinf.2024.105592. Epub 2024 Aug 16.

2. The Performance of Chatbots and the AAPOS Website as a Tool for Amblyopia Education.
   J Pediatr Ophthalmol Strabismus. 2024 Sep-Oct;61(5):325-331. doi: 10.3928/01913913-20240409-01. Epub 2024 May 30.

3. Talking technology: exploring chatbots as a tool for cataract patient education.
   Clin Exp Optom. 2025 Jan;108(1):56-64. doi: 10.1080/08164622.2023.2298812. Epub 2024 Jan 9.

4. Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care.
   Medicine (Baltimore). 2024 Aug 16;103(33):e39305. doi: 10.1097/MD.0000000000039305.

5. Decoding medical jargon: The use of AI language models (ChatGPT-4, BARD, Microsoft Copilot) in radiology reports.
   Patient Educ Couns. 2024 Sep;126:108307. doi: 10.1016/j.pec.2024.108307. Epub 2024 May 3.

6. Evaluating the Efficacy of ChatGPT as a Patient Education Tool in Prostate Cancer: Multimetric Assessment.
   J Med Internet Res. 2024 Aug 14;26:e55939. doi: 10.2196/55939.

7. Artificial intelligence chatbots as sources of patient education material for cataract surgery: ChatGPT-4 versus Google Bard.
   BMJ Open Ophthalmol. 2024 Oct 17;9(1):e001824. doi: 10.1136/bmjophth-2024-001824.

8. Dr. Google to Dr. ChatGPT: assessing the content and quality of artificial intelligence-generated medical information on appendicitis.
   Surg Endosc. 2024 May;38(5):2887-2893. doi: 10.1007/s00464-024-10739-5. Epub 2024 Mar 5.

9. Assessing the Readability of Patient Education Materials on Cardiac Catheterization From Artificial Intelligence Chatbots: An Observational Cross-Sectional Study.
   Cureus. 2024 Jul 4;16(7):e63865. doi: 10.7759/cureus.63865. eCollection 2024 Jul.

10. Accuracy and Readability of Artificial Intelligence Chatbot Responses to Vasectomy-Related Questions: Public Beware.
    Cureus. 2024 Aug 28;16(8):e67996. doi: 10.7759/cureus.67996. eCollection 2024 Aug.

Cited By

1. Potential of AI Chatbots in Online Hair Transplantation Consultations: A Multi-metric Assessment of Three Models.
   Aesthetic Plast Surg. 2025 Aug 8. doi: 10.1007/s00266-025-05103-4.

2. How Successful Is AI in Developing Postsurgical Wound Care Education Material?
   Wound Repair Regen. 2025 May-Jun;33(3):e70041. doi: 10.1111/wrr.70041.

3. Assessing the quality and readability of patient education materials on chemotherapy cardiotoxicity from artificial intelligence chatbots: An observational cross-sectional study.
   Medicine (Baltimore). 2025 Apr 11;104(15):e42135. doi: 10.1097/MD.0000000000042135.