Khan Maria Palwasha, O'Sullivan Eoin Daniel
Kidney Health Service, Metro North Hospital and Health Service, Brisbane, QLD, Australia.
Institute for Molecular Bioscience, University of Queensland, St Lucia, QLD, Australia.
Front Artif Intell. 2024 Aug 5;7:1379297. doi: 10.3389/frai.2024.1379297. eCollection 2024.
The rise of accessible, consumer-facing large language models (LLMs) offers clinicians an opportunity for immediate diagnostic support.
To compare the performance characteristics of common LLMs in solving complex clinical cases and to assess the utility of a novel tool for grading LLM output.
Using a newly developed rubric to assess the models' diagnostic utility, we measured the models' ability to answer cases in terms of accuracy, readability, clinical interpretability, and safety. Here we present a comparative analysis of three LLMs (Bing, ChatGPT, and Gemini) across a diverse set of clinical cases as presented in the New England Journal of Medicine case series.
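As a sketch of how such a rubric might be operationalized, the snippet below models one graded response as a record with an ordinal score in each of the four domains. This is a minimal illustration in Python; the field names, the 0-5 scale, and the unweighted aggregation are assumptions for exposition, not the published instrument.

```python
# Minimal sketch of a rubric record; the 0-5 ordinal scale and field
# names are illustrative assumptions, not the authors' instrument.
from dataclasses import dataclass

@dataclass
class RubricScore:
    model: str             # e.g. "Gemini"
    case_id: str           # identifier of the NEJM case
    accuracy: int          # 0-5: did the answer match the diagnosis?
    readability: int       # 0-5: clarity of the written response
    interpretability: int  # 0-5: is the clinical reasoning explicit?
    safety: int            # 0-5: absence of harmful recommendations

    def total(self) -> int:
        """Unweighted sum across the four domains (illustrative)."""
        return self.accuracy + self.readability + self.interpretability + self.safety

score = RubricScore("Gemini", "case-001", accuracy=4, readability=5,
                    interpretability=4, safety=5)
print(score.total())  # 18
```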
Our results suggest that the models performed differently when presented with identical clinical information, with Gemini performing best. Our grading tool showed low interobserver variability and proved to be a reliable means of grading LLM clinical output.
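Interobserver variability on ordinal rubric scores is commonly quantified with a weighted Cohen's kappa. The sketch below, using invented ratings, shows one way such an agreement check could be run for two raters scoring the same outputs; the abstract does not state which agreement statistic the authors used, so this is illustrative only.

```python
# Illustrative interobserver-agreement check; the ratings are invented
# and the choice of statistic is an assumption, not the paper's method.
from sklearn.metrics import cohen_kappa_score

rater_a = [4, 3, 5, 2, 4, 5, 3, 4]
rater_b = [4, 3, 4, 2, 4, 5, 3, 5]

# Quadratic weighting penalizes large ordinal disagreements more heavily.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```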
This research underscores the variation in model performance across clinical scenarios and highlights the importance of evaluating diagnostic model performance in diverse clinical settings prior to deployment. Furthermore, we provide a new tool to assess LLM output.