

A comparison of the diagnostic ability of large language models in challenging clinical cases.

Authors

Khan Maria Palwasha, O'Sullivan Eoin Daniel

Affiliations

Kidney Health Service, Metro North Hospital and Health Service, Brisbane, QLD, Australia.

Institute of Molecular Bioscience, University of Queensland, St Lucia, QLD, Australia.

Publication

Front Artif Intell. 2024 Aug 5;7:1379297. doi: 10.3389/frai.2024.1379297. eCollection 2024.

Abstract

INTRODUCTION

The rise of accessible, consumer-facing large language models (LLMs) provides an opportunity for immediate diagnostic support for clinicians.

OBJECTIVES

To compare the performance characteristics of common LLMs in solving complex clinical cases, and to assess the utility of a novel tool for grading LLM output.

METHODS

Using a newly developed rubric to assess the models' diagnostic utility, we measured the models' ability to answer cases according to accuracy, readability, clinical interpretability, and an assessment of safety. Here we present a comparative analysis of three LLMs (Bing, ChatGPT, and Gemini) across a diverse set of clinical cases as presented in the New England Journal of Medicine's case series.

RESULTS

Our results suggest that the models performed differently when presented with identical clinical information, with Gemini performing best. Our grading tool had low interobserver variability and proved to be a reliable instrument for grading LLM clinical output.

CONCLUSION

This research underscores the variation in model performance in clinical scenarios and highlights the importance of considering diagnostic model performance in diverse clinical scenarios prior to deployment. Furthermore, we provide a new tool to assess LLM output.


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88ef/11330891/751265600969/frai-07-1379297-g001.jpg
