
Similar Articles

1. Evolution of patient education materials from large-language artificial intelligence models on complex regional pain syndrome: are patients learning?
Proc (Bayl Univ Med Cent). 2025 Feb 28;38(3):221-226. doi: 10.1080/08998280.2025.2470033. eCollection 2025.

2. Evaluation of Patient Education Materials From Large-Language Artificial Intelligence Models on Carpal Tunnel Release.
Hand (N Y). 2024 Apr 25:15589447241247332. doi: 10.1177/15589447241247332.

3. Dr. Google vs. Dr. ChatGPT: Exploring the Use of Artificial Intelligence in Ophthalmology by Comparing the Accuracy, Safety, and Readability of Responses to Frequently Asked Patient Questions Regarding Cataracts and Cataract Surgery.
Semin Ophthalmol. 2024 Aug;39(6):472-479. doi: 10.1080/08820538.2024.2326058. Epub 2024 Mar 22.

4. Appropriateness and readability of Google Bard and ChatGPT-3.5 generated responses for surgical treatment of glaucoma.
Rom J Ophthalmol. 2024 Jul-Sep;68(3):243-248. doi: 10.22336/rjo.2024.45.

5. ChatGPT as a Source of Patient Information for Lumbar Spinal Fusion and Laminectomy: A Comparative Analysis Against Google Web Search.
Clin Spine Surg. 2024 Dec 1;37(10):E394-E403. doi: 10.1097/BSD.0000000000001582. Epub 2024 Feb 20.

6. ChatGPT-4 Generates More Accurate and Complete Responses to Common Patient Questions About Anterior Cruciate Ligament Reconstruction Than Google's Search Engine.
Arthrosc Sports Med Rehabil. 2024 Apr 9;6(3):100939. doi: 10.1016/j.asmr.2024.100939. eCollection 2024 Jun.

7. Consulting the Digital Doctor: Google Versus ChatGPT as Sources of Information on Breast Implant-Associated Anaplastic Large Cell Lymphoma and Breast Implant Illness.
Aesthetic Plast Surg. 2024 Feb;48(4):590-607. doi: 10.1007/s00266-023-03713-4. Epub 2023 Oct 30.

8. Comparing the Accuracy and Readability of Glaucoma-related Question Responses and Educational Materials by Google and ChatGPT.
J Curr Glaucoma Pract. 2024 Jul-Sep;18(3):110-116. doi: 10.5005/jp-journals-10078-1448. Epub 2024 Oct 29.

9. Optimizing Ophthalmology Patient Education via ChatBot-Generated Materials: Readability Analysis of AI-Generated Patient Education Materials and The American Society of Ophthalmic Plastic and Reconstructive Surgery Patient Brochures.
Ophthalmic Plast Reconstr Surg. 2024;40(2):212-216. doi: 10.1097/IOP.0000000000002549. Epub 2023 Nov 16.

10. Assessing the Quality and Reliability of ChatGPT's Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4.
JMIR Cancer. 2025 Apr 16;11:e63677. doi: 10.2196/63677.

References Cited in This Article

1. Comparative Accuracy of ChatGPT 4.0 and Google Gemini in Answering Pediatric Radiology Text-Based Questions.
Cureus. 2024 Oct 5;16(10):e70897. doi: 10.7759/cureus.70897. eCollection 2024 Oct.

2. Can ChatGPT-4 Diagnose and Treat Like an Orthopaedic Surgeon? Testing Clinical Decision Making and Diagnostic Ability in Soft-Tissue Pathologies of the Foot and Ankle.
J Am Acad Orthop Surg. 2024 Oct 15;33(16):917-923. doi: 10.5435/JAAOS-D-24-00595.

3. Are large language models a useful resource to address common patient concerns on hallux valgus? A readability analysis.
Foot Ankle Surg. 2025 Jan;31(1):15-19. doi: 10.1016/j.fas.2024.08.002. Epub 2024 Aug 6.

4. Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis.
J Med Internet Res. 2024 May 22;26:e53164. doi: 10.2196/53164.

5. Performance of ChatGPT on NASS Clinical Guidelines for the Diagnosis and Treatment of Low Back Pain: A Comparison Study.
Spine (Phila Pa 1976). 2024 May 1;49(9):640-651. doi: 10.1097/BRS.0000000000004915. Epub 2024 Jan 12.

6. The complex regional pain syndrome: Diagnosis and management strategies.
Neurosciences (Riyadh). 2023 Oct;28(4):211-219. doi: 10.17712/nsj.2023.4.20230034.

7. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns.
Healthcare (Basel). 2023 Mar 19;11(6):887. doi: 10.3390/healthcare11060887.

8. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models.
PLOS Digit Health. 2023 Feb 9;2(2):e0000198. doi: 10.1371/journal.pdig.0000198. eCollection 2023 Feb.

9. Artificial Hallucinations in ChatGPT: Implications in Scientific Writing.
Cureus. 2023 Feb 19;15(2):e35179. doi: 10.7759/cureus.35179. eCollection 2023 Feb.

10. Evaluation of YouTube videos as sources of information about complex regional pain syndrome.
Korean J Pain. 2022 Jul 1;35(3):319-326. doi: 10.3344/kjp.2022.35.3.319.


Evolution of patient education materials from large-language artificial intelligence models on complex regional pain syndrome: are patients learning?

Authors

Gupta Anuj, Basha Adil, Sontam Tarun R, Hlavinka William J, Croen Brett J, Abdou Cherry, Abdullah Mohammed, Hamilton Rita

Affiliations

Texas A&M School of Medicine, Dallas, Texas, USA.

Department of Orthopedic Surgery, University of Pennsylvania Health System, Philadelphia, Pennsylvania, USA.

Publication Information

Proc (Bayl Univ Med Cent). 2025 Feb 28;38(3):221-226. doi: 10.1080/08998280.2025.2470033. eCollection 2025.

DOI: 10.1080/08998280.2025.2470033
PMID: 40336903
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12057770/
Abstract

OBJECTIVES

This study assessed the comprehensiveness and readability of medical information about complex regional pain syndrome provided by ChatGPT, an artificial intelligence (AI) chatbot, and Google using standardized scoring systems.

DESIGN

A Google search was conducted using the term "complex regional pain syndrome," and the first 10 frequently asked questions (FAQs) and answers generated were recorded. These Google-generated FAQs were then presented to ChatGPT, and its responses were evaluated alongside Google's answers using multiple metrics. ChatGPT was then asked to generate its own set of 10 FAQs and answers.

RESULTS

ChatGPT's answers were significantly longer than Google's in response to both independently generated questions (330.0 ± 51.3 words, P < 0.0001) and Google-generated questions (289.7 ± 40.6 words, P < 0.0001). ChatGPT's answers to Google-generated questions were also more difficult to read based on the Flesch-Kincaid Reading Ease Score (13.6 ± 10.8, P = 0.017).
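The readability metric cited above follows the standard Flesch Reading Ease formula (the score the paper labels the Flesch-Kincaid Reading Ease Score), where lower values indicate harder text: scores below roughly 30 correspond to college-graduate-level reading, which is where ChatGPT's mean of 13.6 falls. A minimal sketch, using hypothetical word, sentence, and syllable counts:

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).

    Higher scores mean easier text; ~60-70 is plain English,
    below ~30 is very difficult (college-graduate level).
    """
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Hypothetical dense medical passage: 100 words, 4 long sentences,
# 180 syllables (many polysyllabic terms) -> score ~29, "very difficult"
dense = flesch_reading_ease(100, 4, 180)

# Same length rewritten in shorter, simpler sentences:
# 100 words, 10 sentences, 120 syllables -> score ~95, "very easy"
simple = flesch_reading_ease(100, 10, 120)
```

The example illustrates why long, polysyllabic chatbot answers score poorly: both the words-per-sentence and syllables-per-word terms subtract from the baseline of 206.835.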

CONCLUSIONS

Our findings suggest that ChatGPT is a promising tool for patient education regarding complex regional pain syndrome based on its ability to generate a variety of question topics with responses from credible sources. That said, challenges such as readability and ethical considerations must be addressed prior to its widespread use for health information.
