Cirkel Lasse, Lechner Fabian, Henk Lukas Alexander, Krusche Martin, Hirsch Martin C, Hertl Michael, Kuhn Sebastian, Knitza Johannes
Institute of Artificial Intelligence, University Hospital Gießen-Marburg, Philipps University, Marburg, Germany.
Institute for Digital Medicine, University Hospital Gießen-Marburg, Philipps University, Marburg, Germany.
Diagnosis (Berl). 2025 May 23. doi: 10.1515/dx-2025-0014.
Interpreting skin findings can be challenging for both laypersons and clinicians. Large language models (LLMs) offer accessible decision support, yet their diagnostic capabilities for dermatological images remain underexplored. This study evaluated the diagnostic performance of LLMs based on image interpretation of common dermatological diseases.
A total of 500 dermatological images, encompassing four prevalent skin conditions (psoriasis, vitiligo, erysipelas and rosacea), were used to compare seven multimodal LLMs (GPT-4o, GPT-4o mini, Gemini 1.5 Pro, Gemini 1.5 Flash, Claude 3.5 Sonnet, Llama3.2 90B and 11B). A standardized prompt was used to generate one top diagnosis.
The highest overall accuracy was achieved by GPT-4o (67.8 %), followed by GPT-4o mini (63.8 %) and Llama3.2 11B (61.4 %). Accuracy varied considerably across conditions: psoriasis yielded the highest mean LLM accuracy (59.2 %), while erysipelas showed the lowest (33.4 %). All LLMs misdiagnosed 11.0 % of the images, whereas 11.6 % were correctly diagnosed by every model. Diagnoses correct across all LLMs were linked to clear, disease-specific features, such as sharply demarcated erythematous plaques in psoriasis. Llama3.2 90B was the only LLM that declined to diagnose certain images, particularly those involving intimate areas of the body.
LLM performance varied significantly, emphasizing the need for cautious usage. Notably, a free, locally hostable model correctly identified the top diagnosis for approximately two-thirds of all images, demonstrating the potential for safer, locally deployed LLMs. Advancements in model accuracy and the integration of clinical metadata could further enhance accessible and reliable clinical decision support systems.