Wan Zhiyu, Guo Yuhang, Bao Shunxing, Wang Qian, Malin Bradley A
Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, USA.
School of Biomedical Engineering, ShanghaiTech University, Shanghai, China.
Health Data Sci. 2025 Apr 1;5:0256. doi: 10.34133/hds.0256. eCollection 2025.
Multimodal large language models (LLMs) have shown potential in various health-related fields, yet many healthcare studies have raised concerns about their reliability and biases. To explore the practical application of multimodal LLMs in skin disease identification and to evaluate sex and age biases, we tested the performance of 2 popular multimodal LLMs, ChatGPT-4 and LLaVA-1.6, across diverse sex and age groups, using a subset of a large dermatoscopic dataset containing around 10,000 images of 3 skin diseases (melanoma, melanocytic nevi, and benign keratosis-like lesions). Compared with 3 convolutional neural network (CNN)-based deep learning models (VGG16, ResNet50, and Model Derm) and one vision transformer model (Swin-B), ChatGPT-4 and LLaVA-1.6 achieved overall accuracies 3% and 23% higher (and F1-scores 4% and 34% higher), respectively, than the best-performing CNN-based baseline, while their accuracies remained 38% and 26% lower (and F1-scores 38% and 19% lower), respectively, than those of Swin-B. Meanwhile, ChatGPT-4 was generally unbiased in identifying these skin diseases across sex and age groups, and LLaVA-1.6 was generally unbiased across age groups, in contrast to Swin-B, which was biased in identifying melanocytic nevi. This study suggests that multimodal LLMs can be useful and fair in dermatological applications, aiding physicians and practitioners with diagnostic recommendations and patient screening. Future experiments with larger and more diverse datasets are needed to further verify and evaluate the reliability and fairness of LLMs in healthcare.
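The bias evaluation described above rests on comparing accuracy and F1-score separately within each demographic subgroup (e.g., sex and age bands). The paper does not specify its implementation, but a minimal sketch of this per-group computation, using hypothetical prediction records and plain Python (no external libraries), could look like this:

```python
from collections import defaultdict

# The 3 lesion classes from the study (labels here are illustrative short names).
LABELS = ["melanoma", "nevus", "keratosis"]

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores over LABELS."""
    f1s = []
    for c in LABELS:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def per_group_metrics(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns {group: (accuracy, macro_f1)} so that gaps between groups
    (e.g., male vs. female, or age bands) can be inspected for bias."""
    by_group = defaultdict(lambda: ([], []))
    for group, t, p in records:
        by_group[group][0].append(t)
        by_group[group][1].append(p)
    out = {}
    for g, (yt, yp) in by_group.items():
        acc = sum(t == p for t, p in zip(yt, yp)) / len(yt)
        out[g] = (acc, macro_f1(yt, yp))
    return out
```

A model would then be flagged as potentially biased when the metric gap between subgroups is large (the study's criterion for "biased" is not reproduced here); the function names and record format are assumptions for illustration only.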