Suppr超能文献


Evaluating ChatGPT's Concordance with Clinical Guidelines of Ménière's Disease in Chinese.

Author Information

Lin Mien-Jen, Hsieh Li-Chun, Chen Chin-Kuo

Affiliations

Department of Medical Education, Chang Gung Memorial Hospital, Taoyuan 33305, Taiwan.

Department of Otolaryngology-Head and Neck Surgery, Mackay Memorial Hospital, Taipei City 10449, Taiwan.

Publication Information

Diagnostics (Basel). 2025 Aug 11;15(16):2006. doi: 10.3390/diagnostics15162006.

Abstract

Background: Generative AI (GenAI) models such as ChatGPT have gained significant attention in recent years for their potential applications in healthcare. This study evaluates the concordance of responses generated by ChatGPT (versions 3.5 and 4.0) with the key action statements (KAS) from the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) clinical practice guideline (CPG) for Ménière's disease (MD), translated into Chinese.

Methods: Seventeen questions derived from the KAS were translated into Chinese and posed to ChatGPT versions 3.5 and 4.0. Responses were categorized as correct, partially correct, incorrect, or non-answers. Concordance with the guidelines was evaluated, and Fisher's exact test assessed statistical differences, with significance set at p < 0.05. A comparative analysis between ChatGPT 3.5 and 4.0 was performed.

Results: ChatGPT 3.5 demonstrated an 82.4% correctness rate (14 correct, 2 partially correct, 1 non-answer), while ChatGPT 4.0 achieved 94.1% (16 correct, 1 partially correct). Overall, 97.1% of responses were correct or partially correct. ChatGPT 4.0 offered enhanced citation accuracy and text clarity but occasionally included redundant details. No significant difference in correctness rates was observed between the models (p = 0.6012).

Conclusions: Both ChatGPT models showed high concordance with the AAO-HNS CPG for MD, with ChatGPT 4.0 exhibiting superior text clarity and citation accuracy. These findings highlight ChatGPT's potential as a reliable assistant for better healthcare communication and clinical operations. Future research should validate these results across broader medical topics and languages to ensure robust integration of GenAI in healthcare.
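The reported p-value can be checked directly from the abstract's cell counts. A minimal sketch, assuming a 2x2 table of correct vs. not-fully-correct responses per model (ChatGPT 3.5: 14 of 17 correct; ChatGPT 4.0: 16 of 17 correct), implements the two-sided Fisher's exact test from scratch with the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins whose
    hypergeometric probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    total = comb(n, col1)

    def p_table(x):
        # Probability of x in the top-left cell, with all margins fixed.
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = p_table(a)
    lo = max(0, col1 - row2)          # smallest feasible top-left cell
    hi = min(row1, col1)              # largest feasible top-left cell
    # Small tolerance guards against floating-point ties.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Correct vs. not-fully-correct counts from the abstract (out of 17 each).
p = fisher_exact_two_sided(14, 3, 16, 1)
print(round(p, 4))  # 0.6012, matching the reported value
```

The same table passed to `scipy.stats.fisher_exact` would give the identical two-sided p-value; the hand-rolled version is shown only to keep the check dependency-free.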


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b6ca/12385325/17f7a139e917/diagnostics-15-02006-g001.jpg

Similar Articles

2
Artificial Intelligence in Peripheral Artery Disease Education: A Battle Between ChatGPT and Google Gemini.
Cureus. 2025 Jun 1;17(6):e85174. doi: 10.7759/cureus.85174. eCollection 2025 Jun.
4
"Dr. AI Will See You Now": How Do ChatGPT-4 Treatment Recommendations Align With Orthopaedic Clinical Practice Guidelines?
Clin Orthop Relat Res. 2024 Dec 1;482(12):2098-2106. doi: 10.1097/CORR.0000000000003234. Epub 2024 Sep 6.
8
ChatGPT and Lacrimal Drainage Disorders: Performance and Scope of Improvement.
Ophthalmic Plast Reconstr Surg. 2023;39(3):221-225. doi: 10.1097/IOP.0000000000002418. Epub 2023 May 10.
9
Positive pressure therapy for Ménière's disease or syndrome.
Cochrane Database Syst Rev. 2015 Mar 10;2015(3):CD008419. doi: 10.1002/14651858.CD008419.pub2.

