

AI = Appropriate Insight? ChatGPT Appropriately Answers Parents' Questions for Common Pediatric Orthopaedic Conditions.

Author Information

Zusman Natalie L, Bauer Matthew, Mann Jennah, Goldstein Rachel Y

Affiliations

Jackie and Gene Autry Orthopedic Center, Children's Hospital Los Angeles, Los Angeles, CA.

Publication Information

J Pediatr Soc North Am. 2024 Feb 5;5(4):762. doi: 10.55275/JPOSNA-2023-762. eCollection 2023 Nov.

DOI: 10.55275/JPOSNA-2023-762
PMID: 40432947
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12088153/
Abstract

Artificial intelligence services, such as ChatGPT (generative pre-trained transformer), can provide parents with tailored responses to their pediatric orthopaedic concerns. We undertook a qualitative study to assess the accuracy of the answers provided by ChatGPT in comparison to OrthoKids ("OK"), a patient-facing educational platform governed by the Pediatric Orthopaedic Society of North America (POSNA), for common pediatric orthopaedic conditions.

A cross-sectional study was performed from May 26 to June 18, 2023. The OK website (orthokids.org) was reviewed and 30 existing questions were collected. The corresponding OK and ChatGPT responses were recorded. Two pediatric orthopaedic surgeons assessed each answer provided by ChatGPT against the OK response. Answers were graded as AGREE (accurate information; question addressed in full), NEUTRAL (accurate information; question not answered), or DISAGREE (information was inaccurate or could be detrimental to patients' health). The evaluators' responses were compiled; discrepancies were adjudicated by a third pediatric orthopaedist. Additional chatbot answer characteristics, such as unprompted treatment recommendations, bias, and referral to a healthcare provider, were recorded. Data were analyzed using descriptive statistics.

The chatbot's answers were agreed upon for 93% of questions. Two responses were felt to be neutral, and no responses met disagreement. Unprompted treatment recommendations were included in 55% of its responses (excluding treatment-specific questions). The chatbot encouraged users to "consult with a healthcare professional" in all responses, with a nearly even split between recommending a generic provider (46%) and specifically naming a pediatric orthopaedist (54%). The chatbot's provider recommendations were inconsistent across related topics; for example, it recommended a pediatric orthopaedist for only 3 of 5 spine conditions.

Questions pertaining to common pediatric orthopaedic conditions were accurately addressed by the chatbot in comparison to a specialty society-governed website. The knowledge that chatbots deliver appropriate responses is reassuring. However, the chatbot frequently offered unsolicited treatment recommendations while inconsistently recommending an orthopaedic consultation. We urge caution to parents utilizing artificial intelligence without also consulting a healthcare professional.

Level of Evidence: IV

Key points:
• Artificial intelligence chatbots are becoming increasingly popular, as demonstrated by the rapid rise of publications on the topic in the last 3 months, and they represent a novel online patient-education platform.
• Across 30 common pediatric orthopaedic conditions, >90% of the chatbot's responses were felt to be in agreement with a specialty society's parent- and patient-facing education platform.
• The chatbot's responses were largely unbiased and referred patients to a healthcare professional. However, the responses lacked references or cited sources for the provided information.
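The grading workflow described in the abstract — two independent raters, with discrepancies resolved by a third adjudicator, followed by descriptive statistics — can be sketched as a small script. This is an illustrative sketch only: the function names and the sample ratings below are hypothetical, not the study's actual data.

```python
# Sketch of the two-rater grading workflow: grades are AGREE, NEUTRAL,
# or DISAGREE; disagreements between raters 1 and 2 are adjudicated by
# a third rater. All ratings here are illustrative placeholders.

def adjudicate(rater1, rater2, rater3):
    """Final grade per question: consensus of raters 1 and 2 when they
    match, otherwise the third rater's adjudication."""
    return [a if a == b else c for a, b, c in zip(rater1, rater2, rater3)]

def agreement_rate(grades):
    """Proportion of questions graded AGREE (a descriptive statistic)."""
    return sum(g == "AGREE" for g in grades) / len(grades)

# Hypothetical ratings for 5 questions
r1 = ["AGREE", "AGREE", "NEUTRAL", "AGREE", "AGREE"]
r2 = ["AGREE", "NEUTRAL", "NEUTRAL", "AGREE", "AGREE"]
r3 = ["AGREE", "AGREE", "AGREE", "AGREE", "AGREE"]  # adjudicator

final = adjudicate(r1, r2, r3)
print(final)                  # ['AGREE', 'AGREE', 'NEUTRAL', 'AGREE', 'AGREE']
print(agreement_rate(final))  # 0.8
```

In this toy run, question 2 is resolved by the adjudicator and question 3 remains NEUTRAL by consensus, giving an 80% agreement rate; the study itself reported 93% agreement across its 30 questions.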
