Sezgin Ali, Tanık Veysel Ozan, Akdoğan Murat, Şahin Yusuf Bozkurt, Akbuğa Kürşat, Hekimsoy Vedat, Tunca Çağatay, Saraçoğlu Erhan, Özlek Bülent
Department of Cardiology, Ankara Etlik City Hospital, Ankara, Türkiye.
Department of Cardiology, Muğla Sıtkı Koçman University, School of Medicine, Muğla, Türkiye.
Turk Kardiyol Dern Ars. 2025 Sep 8. doi: 10.5543/tkda.2025.54968.
Management of aortic stenosis (AS) requires integrating complex clinical, imaging, and risk stratification data. Large language models (LLMs) such as ChatGPT and Gemini AI have shown promise in healthcare, but their performance in valvular heart disease, particularly AS, has not been thoroughly assessed. This study systematically compared ChatGPT and Gemini AI in addressing guideline-based and clinical scenario questions related to AS.
Forty open-ended AS-related questions were developed, comprising 20 knowledge-based and 20 clinical scenario items based on the 2021 European Society of Cardiology/European Association for Cardio-Thoracic Surgery (ESC/EACTS) guidelines. Both models were queried independently. Responses were evaluated by two blinded cardiologists using a structured 4-point scoring system. Composite scores were categorized, and comparisons were performed using Wilcoxon signed-rank and chi-square tests.
Gemini AI achieved a significantly higher mean overall score than ChatGPT (3.96 ± 0.17 vs. 3.56 ± 0.87; P = 0.003). Fully guideline-compliant responses were more frequent with Gemini AI (95.0%) than with ChatGPT (72.5%), although the overall compliance distribution difference did not reach conventional significance (P = 0.067). Gemini AI performed more consistently across both question types. Inter-rater agreement was excellent for ChatGPT (κ = 0.94) and moderate for Gemini AI (κ = 0.66).
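The inter-rater agreement statistic reported above (Cohen's κ) corrects the raw agreement rate between the two blinded cardiologists for agreement expected by chance, κ = (p_o − p_e) / (1 − p_e). A minimal sketch in pure Python, using hypothetical 4-point scores (not the study's actual rating data):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical scores on the same items."""
    assert len(rater1) == len(rater2) and rater1, "need paired, non-empty ratings"
    n = len(rater1)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2.get(k, 0) for k in c1) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 4-point scores from two raters on ten model responses
rater_a = [4, 4, 3, 4, 2, 4, 4, 3, 4, 4]
rater_b = [4, 4, 3, 4, 2, 4, 4, 4, 4, 4]
print(round(cohen_kappa(rater_a, rater_b), 2))  # → 0.76
```

With heavily skewed ratings (most responses scored 4, as in this study), chance agreement p_e is high, so even high raw agreement can yield only a moderate κ.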
Gemini AI demonstrated superior accuracy, consistency, and guideline adherence compared to ChatGPT. While LLMs show potential as adjunctive tools in cardiovascular care, expert oversight remains essential, and further model refinement is needed before clinical integration, particularly in AS management.