Gupta Suhasini, Haislup Brett D, Tyagi Anisha, Sudah Suleiman Y, Hoffman Ryan A, Murthi Anand M
T.H. Chan School of Medicine, University of Massachusetts, Worcester, MA, USA.
Department of Shoulder and Elbow Surgery, MedStar Union Memorial Hospital, Baltimore, MD, USA.
J Shoulder Elbow Surg. 2025 Feb 17. doi: 10.1016/j.jse.2024.12.048.
This study aims to analyze and compare the quality, accuracy, and readability of information regarding anatomic total shoulder arthroplasty (aTSA) and reverse total shoulder arthroplasty (rTSA) provided by two AI interfaces, OpenAI's ChatGPT and Microsoft's CoPilot.
Thirty questions commonly asked by patients (categorized by the Rothwell criteria into Fact, Policy, and Value) were input into ChatGPT 3.5 and CoPilot. Responses were assessed with the DISCERN scale, the Journal of the American Medical Association (JAMA) benchmark criteria, the Flesch Reading Ease Score (FRES), and the Flesch-Kincaid Grade Level (FKGL). The sources of the citations provided by CoPilot were further analyzed.
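For readers unfamiliar with the two readability metrics, the sketch below computes FRES and FKGL from their published formulas; under standard interpretations, FRES below 30 corresponds to text best understood by university graduates. The syllable counter is a naive vowel-group heuristic for illustration only, not the validated tooling the study may have used.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per vowel group; real tools use
    # pronunciation dictionaries (e.g., CMUdict).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    wps = n_words / sentences       # average words per sentence
    spw = n_syllables / n_words     # average syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw  # Flesch Reading Ease Score
    fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
    return fres, fkgl
```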
Both AI interfaces generated DISCERN scores >50 (aTSA and rTSA ChatGPT: 57 [Fact], 61 [Policy], 58 [Value]; aTSA and rTSA CoPilot: 68 [Fact], 72 [Policy], 70 [Value]), indicating "good" quality of information, except for the Policy questions answered by CoPilot, which scored "excellent" (>70). CoPilot's higher JAMA score (3 vs. 0) and FRES scores >30 indicated more reliable, more accessible responses, although these still required a minimum of a 12th-grade education to read. In comparison, ChatGPT generated more complex text, with most FRES scores <20 and FKGL scores signifying academic-level complexity. Finally, CoPilot provided citations, with academic sources the most frequently cited category (31.1% for rTSA and 26.7% for aTSA), suggesting reliable sources of information.
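As a minimal sketch of the quality banding applied above, assuming only the two thresholds the abstract defines (the label for scores of 50 and below is a placeholder, not taken from the study):

```python
def discern_band(score: int) -> str:
    """Map a total DISCERN score to the quality bands used in this study."""
    if score > 70:
        return "excellent"    # e.g., CoPilot's Policy score of 72
    if score > 50:
        return "good"         # all remaining scores reported above
    return "below 'good'"     # assumption: abstract defines no band here

# The reported scores all land in the expected bands.
assert discern_band(72) == "excellent"
assert all(discern_band(s) == "good" for s in (57, 61, 58, 68, 70))
```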
Overall, the information provided by both AI interfaces, ChatGPT and CoPilot, was scored as a "good" source of information for commonly asked patient questions regarding shoulder arthroplasty. However, CoPilot's answers to shoulder arthroplasty questions proved more reliable (P = .0061), less complex, and easier to read (P = .0031), and referenced reliable resources including academic sources, journal articles, and medical sites. Although CoPilot's answers were "easier" to read, they still required a 12th-grade education, which may be too complex for most patients and poses a challenge for patient comprehension. A substantial number of nonmedical media sites and commercial sources were also cited by CoPilot for both aTSA and rTSA questions. Critically, answers from both AI interfaces should serve as supplementary resources rather than primary sources on perioperative conditions pertaining to shoulder arthroplasty.