Appraisal of ChatGPT's responses to common patient questions regarding Tommy John surgery.

Author Information

Shaari Ariana L, Fano Adam N, Anakwenze Oke, Klifto Christopher

Affiliations

Rutgers New Jersey Medical School, Newark, New Jersey, USA.

Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, USA.

Publication Information

Shoulder Elbow. 2024 Jul;16(4):429-435. doi: 10.1177/17585732241259754. Epub 2024 Sep 20.

Abstract

BACKGROUND

Artificial intelligence (AI) has progressed at a fast pace. ChatGPT, a rapidly expanding AI platform, has a growing number of applications in medicine and patient care. However, its ability to provide high-quality answers to patient questions about orthopedic procedures such as Tommy John surgery is unknown. Our objective was to evaluate the quality of information provided by ChatGPT 3.5 and 4.0 in response to patient questions regarding Tommy John surgery.

METHODS

Twenty-five patient questions regarding Tommy John surgery were posed to ChatGPT 3.5 and 4.0. Readability was assessed via the Flesch-Kincaid Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog Score, Simple Measure of Gobbledygook, Coleman-Liau Index, and Automated Readability Index. The quality of each response was graded using a 5-point Likert scale.
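
The abstract does not state which tool was used to compute these indices. As an illustration only, all six can be obtained with an off-the-shelf readability library such as the Python textstat package (an assumed choice for this sketch, not necessarily the authors' tool), applied to each ChatGPT response:

```python
# Minimal sketch: scoring one response with the six readability indices
# named in METHODS, using the textstat package (assumed tool, not the authors').
import textstat

# A sample answer standing in for one of the 25 ChatGPT responses (hypothetical text).
response = (
    "Tommy John surgery reconstructs the ulnar collateral ligament of the elbow. "
    "Surgeons typically use a tendon graft taken from the forearm or hamstring. "
    "Rehabilitation usually takes about twelve to eighteen months before a return to throwing."
)

scores = {
    "Flesch-Kincaid Reading Ease": textstat.flesch_reading_ease(response),
    "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(response),
    "Gunning Fog Score": textstat.gunning_fog(response),
    "Simple Measure of Gobbledygook": textstat.smog_index(response),
    "Coleman-Liau Index": textstat.coleman_liau_index(response),
    "Automated Readability Index": textstat.automated_readability_index(response),
}

for name, value in scores.items():
    print(f"{name}: {value:.1f}")
```

Most of these indices map surface features such as sentence length and syllable or character counts onto an approximate U.S. school-grade level, which is how responses can be compared against the sixth-to-eighth-grade reading level commonly recommended for patient education materials.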

RESULTS

ChatGPT generated information at a reading level that greatly exceeds the level recommended for patient education materials. ChatGPT 4.0 produced slightly better responses to common questions regarding Tommy John surgery, with fewer inaccuracies than ChatGPT 3.5.

CONCLUSION

Although ChatGPT can provide accurate information regarding Tommy John surgery, its responses may not be easily comprehended by the average patient. As AI platforms become more accessible to the public, patients must be aware of their limitations.

Similar Articles

1
Appraisal of ChatGPT's responses to common patient questions regarding Tommy John surgery.
Shoulder Elbow. 2024 Jul;16(4):429-435. doi: 10.1177/17585732241259754. Epub 2024 Sep 20.
3
Artificial Intelligence in Peripheral Artery Disease Education: A Battle Between ChatGPT and Google Gemini.
Cureus. 2025 Jun 1;17(6):e85174. doi: 10.7759/cureus.85174. eCollection 2025 Jun.
4
Chat Generative Pretraining Transformer Answers Patient-focused Questions in Cervical Spine Surgery.
Clin Spine Surg. 2024 Jul 1;37(6):E278-E281. doi: 10.1097/BSD.0000000000001600. Epub 2024 Mar 21.
5
Accuracy and Readability of ChatGPT Responses to Patient-Centric Strabismus Questions.
J Pediatr Ophthalmol Strabismus. 2025 May-Jun;62(3):220-227. doi: 10.3928/01913913-20250110-02. Epub 2025 Feb 19.
6
Can ChatGPT provide parent education for oral immunotherapy?
Ann Allergy Asthma Immunol. 2025 Jul;135(1):87-90. doi: 10.1016/j.anai.2025.04.011. Epub 2025 Apr 24.
7
Can Patients Rely on ChatGPT to Answer Hand Pathology-Related Medical Questions?
Hand (N Y). 2024 Apr 23:15589447241247246. doi: 10.1177/15589447241247246.

Cited By

1
Evaluation of Large Language Models' Concordance With Guidelines on Olfaction.
Laryngoscope Investig Otolaryngol. 2025 Mar 22;10(2):e70130. doi: 10.1002/lio2.70130. eCollection 2025 Apr.
