Multimodal LLMs for retinal disease diagnosis via OCT: few-shot versus single-shot learning

Authors

Agbareia Reem, Omar Mahmud, Zloto Ofira, Glicksberg Benjamin S, Nadkarni Girish N, Klang Eyal

Affiliations

Ophthalmology Department, Hadassah Medical Center, Jerusalem, Israel.

Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel.

Publication

Ther Adv Ophthalmol. 2025 May 20;17:25158414251340569. doi: 10.1177/25158414251340569. eCollection 2025 Jan-Dec.

Abstract

BACKGROUND AND AIM

Multimodal large language models (LLMs) have shown potential in processing both text and image data for clinical applications. This study evaluated their diagnostic performance in identifying retinal diseases from optical coherence tomography (OCT) images.

METHODS

We assessed the diagnostic accuracy of GPT-4o and Claude Sonnet 3.5 using two public OCT datasets (OCTID, OCTDL) containing expert-labeled images of four pathological conditions and normal retinas. Both models were tested using single-shot and few-shot prompts, with a total of 3,088 API calls across the two models. Statistical analyses were performed to evaluate differences in overall and condition-specific performance.
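The few-shot evaluation described above can be sketched as follows. This is a minimal illustration, not the authors' published protocol: the prompt wording, the label list, and the message layout (labeled example images followed by the query image, in the OpenAI-style chat format) are all assumptions.

```python
import base64


def data_url(image_bytes: bytes) -> str:
    """Encode raw image bytes as a base64 data URL, the form
    vision-capable chat APIs accept for inline images."""
    return "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode("ascii")


def build_few_shot_messages(examples, query_bytes, labels):
    """Assemble a chat payload: a system instruction, then each labeled
    example OCT image as a user/assistant turn, then the unlabeled query.

    examples: list of (image_bytes, label) pairs used as in-context shots.
    A single-shot run is the same payload with an empty examples list.
    """
    messages = [{
        "role": "system",
        "content": "Classify the OCT image as one of: " + ", ".join(labels),
    }]
    for img_bytes, label in examples:
        messages.append({"role": "user", "content": [
            {"type": "image_url", "image_url": {"url": data_url(img_bytes)}},
        ]})
        messages.append({"role": "assistant", "content": label})
    # Final turn: the image to be diagnosed, with no label.
    messages.append({"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": data_url(query_bytes)}},
    ]})
    return messages
```

In a real run this payload would be sent to the model endpoint (e.g. `client.chat.completions.create(model=..., messages=...)`) once per test image, which is how the per-call count in the study accumulates.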

RESULTS

GPT-4o's accuracy improved from 56.29% with single-shot prompts to 73.08% with few-shot prompts (p < 0.001). Similarly, Claude Sonnet 3.5 increased from 40.03% to 70.98% using the same approach (p < 0.001). Condition-specific analyses revealed similar trends, with absolute improvements ranging from 2% to 64%. These findings were consistent across the validation dataset.

CONCLUSION

Few-shot prompted multimodal LLMs show promise for clinical integration, particularly in identifying normal retinas, which could help streamline referral processes in primary care. While these models fall short of the diagnostic accuracy reported in established deep learning literature, they offer simple, effective tools for assisting in routine retinal disease diagnosis. Future research should focus on further validation and integrating clinical text data with imaging.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6fcd/12093016/88761aae63dd/10.1177_25158414251340569-fig1.jpg
