
Appraisal of ChatGPT's responses to common patient questions regarding Tommy John surgery.

Author Information

Shaari Ariana L, Fano Adam N, Anakwenze Oke, Klifto Christopher

Affiliations

Rutgers New Jersey Medical School, Newark, New Jersey, USA.

Thomas Jefferson University Hospital, Philadelphia, Pennsylvania, USA.

Publication Information

Shoulder Elbow. 2024 Jul;16(4):429-435. doi: 10.1177/17585732241259754. Epub 2024 Sep 20.


DOI: 10.1177/17585732241259754
PMID: 39318412
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11418706/
Abstract

BACKGROUND: Artificial intelligence (AI) has progressed at a fast pace. ChatGPT, a rapidly expanding AI platform, has several growing applications in medicine and patient care. However, its ability to provide high-quality answers to patient questions about orthopedic procedures such as Tommy John surgery is unknown. Our objective is to evaluate the quality of information provided by ChatGPT 3.5 and 4.0 in response to patient questions regarding Tommy John surgery.

METHODS: Twenty-five patient questions regarding Tommy John surgery were posed to ChatGPT 3.5 and 4.0. Readability was assessed via the Flesch-Kincaid Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog Score, Simple Measure of Gobbledygook, Coleman-Liau Index, and Automated Readability Index. The quality of each response was graded using a 5-point Likert scale.

RESULTS: ChatGPT generated information at an educational level that greatly exceeds the recommended level. ChatGPT 4.0 produced slightly better responses to common questions regarding Tommy John surgery, with fewer inaccuracies than ChatGPT 3.5.

CONCLUSION: Although ChatGPT can provide accurate information regarding Tommy John surgery, its responses may not be easily comprehended by the average patient. As AI platforms become more accessible to the public, patients must be aware of their limitations.
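The readability measures named in the methods (e.g. Flesch-Kincaid Reading Ease and Grade Level) are computed from simple word, sentence, and syllable counts. The study does not specify its tooling, so the following is only a minimal sketch of the two Flesch-Kincaid formulas, using a naive vowel-group heuristic for syllable counting (an assumption; dedicated readability libraries use more careful syllable rules):

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count vowel groups, discounting a silent final 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith("le"):
        count -= 1  # treat trailing 'e' as silent (heuristic)
    return max(count, 1)

def flesch_metrics(text: str) -> tuple[float, float]:
    """Return (reading_ease, grade_level) via the standard Flesch-Kincaid formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Higher Reading Ease means easier text, while Grade Level approximates the US school grade needed to follow it; patient education materials are commonly recommended to sit at roughly a sixth-grade level, which is the benchmark the abstract says ChatGPT's answers exceed.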

Similar Articles

[1]
Appraisal of ChatGPT's responses to common patient questions regarding Tommy John surgery.

Shoulder Elbow. 2024-7

[2]
American Academy of Orthopaedic Surgeons OrthoInfo provides more readable information regarding rotator cuff injury than ChatGPT.

J ISAKOS. 2025-2-12

[3]
Artificial Intelligence in Peripheral Artery Disease Education: A Battle Between ChatGPT and Google Gemini.

Cureus. 2025-6-1

[4]
Chat Generative Pretraining Transformer Answers Patient-focused Questions in Cervical Spine Surgery.

Clin Spine Surg. 2024-7-1

[5]
Accuracy and Readability of ChatGPT Responses to Patient-Centric Strabismus Questions.

J Pediatr Ophthalmol Strabismus. 2025

[6]
Can ChatGPT provide parent education for oral immunotherapy?

Ann Allergy Asthma Immunol. 2025-7

[7]
Can Patients Rely on ChatGPT to Answer Hand Pathology-Related Medical Questions?

Hand (N Y). 2024-4-23

[8]
Evaluating the readability, quality, and reliability of responses generated by ChatGPT, Gemini, and Perplexity on the most commonly asked questions about Ankylosing spondylitis.

PLoS One. 2025-6-18

[9]
A structured evaluation of LLM-generated step-by-step instructions in cadaveric brachial plexus dissection.

BMC Med Educ. 2025-7-1

[10]
Artificial Intelligence Shows Limited Success in Improving Readability Levels of Spanish-language Orthopaedic Patient Education Materials.

Clin Orthop Relat Res. 2025-2-11

Cited By

[1]
Evaluation of Large Language Models' Concordance With Guidelines on Olfaction.

Laryngoscope Investig Otolaryngol. 2025-3-22

References

[1]
Readability of Online Information on Core Decompression of the Hip for Avascular Necrosis.

Cureus. 2023-12-10

[2]
Graft choice and techniques used in elbow ulnar collateral ligament reconstruction over the last 20 years: a systematic review and meta-analysis.

J Shoulder Elbow Surg. 2024-5

[3]
Appropriateness and Reliability of an Online Artificial Intelligence Platform's Responses to Common Questions Regarding Distal Radius Fractures.

J Hand Surg Am. 2024-2

[4]
Navigating Generative AI: Opportunities, Limitations, and Ethical Considerations in Massage Therapy and Beyond.

Int J Ther Massage Bodywork. 2023-12-1

[5]
Quality and Readability Analysis of Online Information on First Metatarsophalangeal Joint Fusion.

J Foot Ankle Surg. 2024

[6]
YouTube videos on ulnar collateral ligament reconstruction are highly variable in terms of reliability and quality: A quantitative analysis.

Shoulder Elbow. 2023-12

[7]
Optimizing Ophthalmology Patient Education via ChatBot-Generated Materials: Readability Analysis of AI-Generated Patient Education Materials and The American Society of Ophthalmic Plastic and Reconstructive Surgery Patient Brochures.

Ophthalmic Plast Reconstr Surg.

[8]
Health literacy in rotator cuff repair: a quantitative assessment of the understandability of online patient education material.

JSES Int. 2023-7-17

[9]
A Readability Analysis of Online Spondylolisthesis and Spondylolysis Patient Resources Among Pediatric Hospital Web Pages: A US-Based Study.

J Am Acad Orthop Surg Glob Res Rev. 2023-11-1

[10]
Online Patient Education Resources for Anterior Cruciate Ligament Reconstruction: An Assessment of the Accuracy and Reliability of Information on the Internet Over the Past Decade.

Cureus. 2023-10-6
