
Similar Articles

1. Is ChatGPT a trusted source of information for total hip and knee arthroplasty patients?
Bone Jt Open. 2024 Feb 15;5(2):139-146. doi: 10.1302/2633-1462.52.BJO-2023-0113.R1.
2. ChatGPT is an Unreliable Source of Peer-Reviewed Information for Common Total Knee and Hip Arthroplasty Patient Questions.
Adv Orthop. 2025 Jan 6;2025:5534704. doi: 10.1155/aort/5534704. eCollection 2025.
3. Is ChatGPT a Reliable Source of Patient Information on Asthma?
Cureus. 2024 Jul 8;16(7):e64114. doi: 10.7759/cureus.64114. eCollection 2024 Jul.
4. Using a Google Web Search Analysis to Assess the Utility of ChatGPT in Total Joint Arthroplasty.
J Arthroplasty. 2023 Jul;38(7):1195-1202. doi: 10.1016/j.arth.2023.04.007. Epub 2023 Apr 10.
5. Evaluating the accuracy and readability of ChatGPT in providing parental guidance for adenoidectomy, tonsillectomy, and ventilation tube insertion surgery.
Int J Pediatr Otorhinolaryngol. 2024 Jun;181:111998. doi: 10.1016/j.ijporl.2024.111998. Epub 2024 May 31.
6. Performance of Artificial Intelligence Chatbots on Glaucoma Questions Adapted From Patient Brochures.
Cureus. 2024 Mar 23;16(3):e56766. doi: 10.7759/cureus.56766. eCollection 2024 Mar.
7. Assessing the Quality and Reliability of ChatGPT's Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4.
JMIR Cancer. 2025 Apr 16;11:e63677. doi: 10.2196/63677.
8. Accuracy and Readability of Artificial Intelligence Chatbot Responses to Vasectomy-Related Questions: Public Beware.
Cureus. 2024 Aug 28;16(8):e67996. doi: 10.7759/cureus.67996. eCollection 2024 Aug.
9. Is Information About Musculoskeletal Malignancies From Large Language Models or Web Resources at a Suitable Reading Level for Patients?
Clin Orthop Relat Res. 2025 Feb 1;483(2):306-315. doi: 10.1097/CORR.0000000000003263. Epub 2024 Sep 25.
10. The Use of Large Language Models to Generate Education Materials about Uveitis.
Ophthalmol Retina. 2024 Feb;8(2):195-201. doi: 10.1016/j.oret.2023.09.008. Epub 2023 Sep 15.

Cited By

1. ChatGPT-4.0 or DeepSeek-V3? Comparative analysis of answers to the most frequently asked questions by total knee replacement candidate patients.
Medicine (Baltimore). 2025 Aug 22;104(34):e43951. doi: 10.1097/MD.0000000000043951.
2. To Self-Treat or Not to Self-Treat: Evaluating the Diagnostic, Advisory and Referral Effectiveness of ChatGPT Responses to the Most Common Musculoskeletal Disorders.
Diagnostics (Basel). 2025 Jul 21;15(14):1834. doi: 10.3390/diagnostics15141834.
3. Evaluating the Accuracy and Readability of ChatGPT in Addressing Patient Queries on Adult Spinal Deformity Surgery.
Global Spine J. 2025 Jul 11:21925682251360655. doi: 10.1177/21925682251360655.
4. Evaluating if ChatGPT Can Answer Common Patient Questions Compared to OrthoInfo Regarding Lateral Epicondylitis.
Iowa Orthop J. 2025;45(1):19-32.
5. Enhancing responses from large language models with role-playing prompts: a comparative study on answering frequently asked questions about total knee arthroplasty.
BMC Med Inform Decis Mak. 2025 May 23;25(1):196. doi: 10.1186/s12911-025-03024-5.
6. Evaluation of ChatGPT Responses About Sexual Activity After Total Hip Arthroplasty: A Comparative Study with Observers of Different Experience Levels.
J Clin Med. 2025 Apr 24;14(9):2942. doi: 10.3390/jcm14092942.
7. Performance of artificial intelligence chatbots in responding to the frequently asked questions of patients regarding dental prostheses.
BMC Oral Health. 2025 Apr 15;25(1):574. doi: 10.1186/s12903-025-05965-9.
8. Evaluating if ChatGPT Can Answer Common Patient Questions Compared With OrthoInfo Regarding Rotator Cuff Tears.
J Am Acad Orthop Surg Glob Res Rev. 2025 Mar 11;9(3). doi: 10.5435/JAAOSGlobal-D-24-00289. eCollection 2025 Mar 1.
9. Examining the Role of Large Language Models in Orthopedics: Systematic Review.
J Med Internet Res. 2024 Nov 15;26:e59607. doi: 10.2196/59607.
10. Large language models in patient education: a scoping review of applications in medicine.
Front Med (Lausanne). 2024 Oct 29;11:1477898. doi: 10.3389/fmed.2024.1477898. eCollection 2024.


Is ChatGPT a trusted source of information for total hip and knee arthroplasty patients?

Authors

Wright Benjamin M, Bodnar Michael S, Moore Andrew D, Maseda Meghan C, Kucharik Michael P, Diaz Connor C, Schmidt Christian M, Mir Hassan R

Affiliations

Morsani College of Medicine, University of South Florida, Tampa, Florida, USA.

Department of Orthopaedic Surgery, University of South Florida, Tampa, Florida, USA.

Publication

Bone Jt Open. 2024 Feb 15;5(2):139-146. doi: 10.1302/2633-1462.52.BJO-2023-0113.R1.

DOI: 10.1302/2633-1462.52.BJO-2023-0113.R1
PMID: 38354748
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10867788/
Abstract

AIMS

While internet search engines have been the primary information source for patients' questions, artificial intelligence large language models like ChatGPT are trending towards becoming the new primary source. The purpose of this study was to determine if ChatGPT can answer patient questions about total hip (THA) and knee arthroplasty (TKA) with consistent accuracy, comprehensiveness, and easy readability.

METHODS

We posed the 20 most Google-searched questions about THA and TKA, plus ten additional postoperative questions, to ChatGPT. Each question was asked twice to evaluate for consistency in quality. Following each response, we responded with, "Please explain so it is easier to understand," to evaluate ChatGPT's ability to reduce response reading grade level, measured as Flesch-Kincaid Grade Level (FKGL). Five resident physicians rated the 120 responses on 1 to 5 accuracy and comprehensiveness scales. Additionally, they answered a "yes" or "no" question regarding acceptability. Mean scores were calculated for each question, and responses were deemed acceptable if ≥ four raters answered "yes."
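The readability measure used above, Flesch-Kincaid Grade Level, is a fixed formula over sentence length and syllable density: FKGL = 0.39 · (words/sentence) + 11.8 · (syllables/word) − 15.59. A minimal sketch (not the authors' code; it uses a rough vowel-group heuristic for syllables rather than a dictionary, so scores are approximate):

```python
import re

def fkgl(text: str) -> float:
    """Approximate Flesch-Kincaid Grade Level of a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Rough heuristic: each run of vowels counts as one syllable.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (total_syllables / len(words))
            - 15.59)
```

Short words and short sentences drive the score down, which is why a "please explain so it is easier to understand" follow-up can lower FKGL even when the content is unchanged.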

RESULTS

The mean accuracy and comprehensiveness scores were 4.26 (95% confidence interval (CI) 4.19 to 4.33) and 3.79 (95% CI 3.69 to 3.89), respectively. Out of all the responses, 59.2% (71/120; 95% CI 50.0% to 67.7%) were acceptable. ChatGPT was consistent when asked the same question twice, giving no significant difference in accuracy (t = 0.821; p = 0.415), comprehensiveness (t = 1.387; p = 0.171), acceptability (χ² = 1.832; p = 0.176), and FKGL (t = 0.264; p = 0.793). There was a significantly lower FKGL (t = 2.204; p = 0.029) for easier responses (11.14; 95% CI 10.57 to 11.71) than original responses (12.15; 95% CI 11.45 to 12.85).
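The 95% confidence intervals reported above are consistent with the usual normal-approximation form, mean ± 1.96 · SE where SE = sample SD / √n. A stdlib-only sketch of that calculation (an assumed illustration, not the authors' analysis code; the scores passed in are hypothetical):

```python
import math
import statistics

def mean_ci95(values):
    """Return (mean, lower, upper) using the normal-approximation
    95% CI: mean ± 1.96 * (sample SD / sqrt(n))."""
    n = len(values)
    m = statistics.mean(values)
    se = statistics.stdev(values) / math.sqrt(n)  # sample SD (n-1 denominator)
    return m, m - 1.96 * se, m + 1.96 * se

# Hypothetical rater scores on the 1-to-5 accuracy scale, for illustration.
mean, lo, hi = mean_ci95([4, 5, 4, 4, 5, 4, 3, 5, 4, 4])
```

With the study's larger n (120 responses × 5 raters) the interval narrows, which is how a mean of 4.26 can carry a CI as tight as 4.19 to 4.33.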

CONCLUSION

ChatGPT answered THA and TKA patient questions with accuracy comparable to previous reports of websites, with adequate comprehensiveness, but with limited acceptability as the sole information source. ChatGPT has potential for answering patient questions about THA and TKA, but needs improvement.
