Suppr 超能文献


Evaluating the Accuracy and Readability of ChatGPT in Addressing Patient Queries on Adult Spinal Deformity Surgery.

Authors

Hernandez Fergui, Guizar Rafael, Avetisian Henry, Abdou Marc A, Karakash William J, Ton Andy, Gallo Matthew C, Ball Jacob R, Wang Jeffrey C, Alluri Ram K, Hah Raymond J, Safaee Michael

Affiliations

Department of Orthopaedic Surgery, Keck School of Medicine of the University of Southern California, Los Angeles, CA, USA.

Department of Orthopaedic Surgery, University of California, Irvine, CA, USA.

Publication

Global Spine J. 2025 Jul 11:21925682251360655. doi: 10.1177/21925682251360655.

DOI: 10.1177/21925682251360655
PMID: 40643892
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12254131/
Abstract

Study Design
Cross-sectional.

Objectives
Adult spinal deformity (ASD) affects 68% of the elderly, and surgical intervention carries complication rates of up to 50%. Effective patient education is essential for managing expectations, yet high patient volumes can limit preoperative counseling. Large language models (LLMs) such as ChatGPT may supplement patient education. This study evaluates ChatGPT-3.5's accuracy and readability in answering common patient questions regarding ASD surgery.

Methods
Structured interviews with ASD surgery patients identified 40 common preoperative questions, of which 19 were selected. Each question was posed to ChatGPT-3.5 in a separate chat session to ensure independent responses. Three spine surgeons assessed response accuracy using a validated 4-point scale (1 = excellent, 4 = unsatisfactory). Readability was analyzed using the Flesch-Kincaid Grade Level formula.

Results
Patient inquiries fell into four themes: (1) preoperative preparation, (2) recovery (pain expectations, physical therapy), (3) lifestyle modifications, and (4) postoperative course. Accuracy scores varied: preoperative responses averaged 1.67, recovery and lifestyle responses 1.33, and postoperative responses 2.0. Overall, 59.7% of responses were excellent (no clarification needed), 26.3% were satisfactory (minimal clarification needed), 12.3% required moderate clarification, and 1.8% were unsatisfactory; one response ("Will my pain return or worsen?") was rated inaccurate by all reviewers. Readability analysis showed that all 19 responses exceeded the eighth-grade reading level, by an average of 5.91 grade levels.

Conclusion
ChatGPT-3.5 shows potential as a supplemental patient education tool but delivers responses of varying accuracy and high reading complexity. While it may support patient understanding, the complexity of its responses may limit their usefulness for individuals with lower health literacy.
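The readability metric used above is the Flesch-Kincaid Grade Level, defined as 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. As an illustration only (not the study's actual analysis code), a minimal sketch with a crude regex-based syllable heuristic:

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    # Count sentence terminators and word tokens.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def count_syllables(word: str) -> int:
        # Crude heuristic: count vowel groups; drop a silent trailing 'e'.
        groups = re.findall(r"[aeiouy]+", word.lower())
        n = len(groups)
        if word.lower().endswith("e") and n > 1:
            n -= 1
        return max(1, n)

    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
```

The syllable counter is approximate, so scores can drift from dictionary-based tools by a grade level or so; for production use, a hyphenation dictionary or an established readability library is preferable.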


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/77f3/12254131/1de4f912052d/10.1177_21925682251360655-fig1.jpg

Similar Articles

1. Evaluating the Accuracy and Readability of ChatGPT in Addressing Patient Queries on Adult Spinal Deformity Surgery. Global Spine J. 2025 Jul 11:21925682251360655. doi: 10.1177/21925682251360655.
2. Using Artificial Intelligence ChatGPT to Access Medical Information About Chemical Eye Injuries: Comparative Study. JMIR Form Res. 2025 Aug 13;9:e73642. doi: 10.2196/73642.
3. Prescription of Controlled Substances: Benefits and Risks.
4. Is Information About Musculoskeletal Malignancies From Large Language Models or Web Resources at a Suitable Reading Level for Patients? Clin Orthop Relat Res. 2025 Feb 1;483(2):306-315. doi: 10.1097/CORR.0000000000003263. Epub 2024 Sep 25.
5. Artificial Intelligence Shows Limited Success in Improving Readability Levels of Spanish-language Orthopaedic Patient Education Materials. Clin Orthop Relat Res. 2025 Feb 11. doi: 10.1097/CORR.0000000000003413.
6. Enhancing the Readability of Online Patient Education Materials Using Large Language Models: Cross-Sectional Study. J Med Internet Res. 2025 Jun 4;27:e69955. doi: 10.2196/69955.
7. Using Artificial Intelligence ChatGPT to Access Medical Information about Chemical Eye Injuries: A Comparative Study. JMIR Form Res. 2025 Jun 30. doi: 10.2196/73642.
8. Can Artificial Intelligence Improve the Readability of Patient Education Materials? Clin Orthop Relat Res. 2023 Nov 1;481(11):2260-2267. doi: 10.1097/CORR.0000000000002668. Epub 2023 Apr 28.
9. Artificial Intelligence in Peripheral Artery Disease Education: A Battle Between ChatGPT and Google Gemini. Cureus. 2025 Jun 1;17(6):e85174. doi: 10.7759/cureus.85174. eCollection 2025 Jun.
10. Assessing ChatGPT's Educational Potential in Lung Cancer Radiotherapy From Clinician and Patient Perspectives: Content Quality and Readability Analysis. JMIR Cancer. 2025 Aug 13;11:e69783. doi: 10.2196/69783.

References Cited in This Article

1. Is ChatGPT a trusted source of information for total hip and knee arthroplasty patients? Bone Jt Open. 2024 Feb 15;5(2):139-146. doi: 10.1302/2633-1462.52.BJO-2023-0113.R1.
2. ChatGPT and large language models in orthopedics: from education and surgery to research. J Exp Orthop. 2023 Dec 1;10(1):128. doi: 10.1186/s40634-023-00700-1.
3. Inclusive AI in Healthcare: Enhancing Bariatric Surgery Education for Diverse Patient Populations. Obes Surg. 2024 Jan;34(1):270-271. doi: 10.1007/s11695-023-06969-6. Epub 2023 Nov 30.
4. ChatGPT in orthopedics: a narrative review exploring the potential of artificial intelligence in orthopedic practice. Front Surg. 2023 Nov 1;10:1284015. doi: 10.3389/fsurg.2023.1284015. eCollection 2023.
5. The Use of Large Language Models to Generate Education Materials about Uveitis. Ophthalmol Retina. 2024 Feb;8(2):195-201. doi: 10.1016/j.oret.2023.09.008. Epub 2023 Sep 15.
6. Assessing ChatGPT Responses to Common Patient Questions Regarding Total Hip Arthroplasty. J Bone Joint Surg Am. 2023 Oct 4;105(19):1519-1526. doi: 10.2106/JBJS.23.00209. Epub 2023 Jul 17.
7. Assessing the Accuracy of Responses by the Language Model ChatGPT to Questions Regarding Bariatric Surgery. Obes Surg. 2023 Jun;33(6):1790-1796. doi: 10.1007/s11695-023-06603-5. Epub 2023 Apr 27.
8. Comparison Between ChatGPT and Google Search as Sources of Postoperative Patient Instructions. JAMA Otolaryngol Head Neck Surg. 2023 Jun 1;149(6):556-558. doi: 10.1001/jamaoto.2023.0704.
9. Fulfillment of Patient Expectations After Spine Surgery is Critical to Patient Satisfaction: A Cohort Study of Spine Surgery Patients. Neurosurgery. 2022 Jul 1;91(1):173-181. doi: 10.1227/neu.0000000000001981. Epub 2022 Apr 22.
10. Readability of Patient Education Materials From High-Impact Medical Journals: A 20-Year Analysis. J Patient Exp. 2021 Mar 3;8:2374373521998847. doi: 10.1177/2374373521998847. eCollection 2021.