
Evolution of patient education materials from large-language artificial intelligence models on complex regional pain syndrome: are patients learning?

Author Information

Gupta Anuj, Basha Adil, Sontam Tarun R, Hlavinka William J, Croen Brett J, Abdou Cherry, Abdullah Mohammed, Hamilton Rita

Affiliations

Texas A&M School of Medicine, Dallas, Texas, USA.

Department of Orthopedic Surgery, University of Pennsylvania Health System, Philadelphia, Pennsylvania, USA.

Publication Information

Proc (Bayl Univ Med Cent). 2025 Feb 28;38(3):221-226. doi: 10.1080/08998280.2025.2470033. eCollection 2025.

Abstract

OBJECTIVES

This study assessed the comprehensiveness and readability of medical information about complex regional pain syndrome provided by ChatGPT, an artificial intelligence (AI) chatbot, and Google using standardized scoring systems.

DESIGN

A Google search was conducted using the term "complex regional pain syndrome," and the first 10 frequently asked questions (FAQs) and answers generated were recorded. ChatGPT was presented with these Google-generated FAQs, and its responses were evaluated alongside Google's answers using multiple metrics. ChatGPT was then asked to generate its own set of 10 FAQs and answers.

RESULTS

ChatGPT's answers were significantly longer than Google's in response to both independently generated questions (330.0 ± 51.3 words, p < 0.0001) and Google-generated questions (289.7 ± 40.6 words, p < 0.0001). ChatGPT's answers to Google-generated questions were also more difficult to read based on the Flesch-Kincaid Reading Ease Score (13.6 ± 10.8, p = 0.017).
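The readability comparison above is based on the Flesch Reading Ease formula (206.835 − 1.015 × words/sentence − 84.6 × syllables/word), where lower scores indicate harder text. As an illustration only, and not the authors' actual scoring tool, a minimal sketch using a naive vowel-group syllable heuristic might look like:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels, treat a lone
    # trailing "e" as silent, and give every word at least one syllable.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences)
    #                               - 84.6*(syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Published readability studies typically rely on validated tools rather than a hand-rolled syllable counter, so scores from this sketch will only approximate those reported.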

CONCLUSIONS

Our findings suggest that ChatGPT is a promising tool for patient education regarding complex regional pain syndrome based on its ability to generate a variety of question topics with responses from credible sources. That said, challenges such as readability and ethical considerations must be addressed prior to its widespread use for health information.



