White Christopher A, Kator Jamie L, Rhee Hannah S, Boucher Thomas, Glenn Rachel, Walsh Amanda, Kim Jaehon M
Department of Orthopaedic Surgery, Icahn School of Medicine at Mount Sinai, New York City, NY, United States.
Hand Surg Rehabil. 2025 Apr;44(2):102082. doi: 10.1016/j.hansur.2025.102082. Epub 2025 Jan 9.
Patients are increasingly turning to the internet, and more recently to artificial intelligence engines (e.g., ChatGPT), for answers to common medical questions. In orthopedic hand surgery, recent literature has focused on ChatGPT's ability to answer patients' frequently asked questions (FAQs) on subjects such as carpal tunnel syndrome and distal radius fractures. The present study seeks to determine how accurately ChatGPT can answer patient FAQs regarding simple fracture patterns such as fifth metacarpal neck fractures.
Internet queries were used to identify the ten most frequently asked questions regarding boxer's fractures, based on information from five trusted healthcare institutions. These ten questions were posed to ChatGPT 4.0, and the chatbot's responses were recorded. Two fellowship-trained orthopedic hand surgeons and one orthopedic hand surgery fellow then graded ChatGPT's responses on an alphabetical grading scale (A-F), with additional commentary provided for each response. Descriptive statistics were used to report grades by question, by grader, and overall.
ChatGPT achieved a cumulative grade of B, indicating that the chatbot can provide adequate responses with only minor need for clarification when answering FAQs on boxer's fractures. The individual graders provided comparable overall grades of B, B, and B+, respectively. ChatGPT deferred to a medical professional in 7 of 10 responses. General questions were graded at A-, and management questions at C+.
Overall, with a grade of B, ChatGPT 4.0 provides adequate-to-complete responses to patient FAQs surrounding boxer's fractures.