Moorfields Eye Hospital NHS Foundation Trust, City Road, London, UK.
Department of Ophthalmology, Inselspital University Hospital of Bern, Bern, Switzerland.
Eye (Lond). 2024 Nov;38(16):3113-3117. doi: 10.1038/s41433-024-03231-w. Epub 2024 Jul 13.
This study aimed to evaluate the accuracy of information that patients can obtain from large language models (LLMs) when seeking answers to common questions about choroidal melanoma.
A comparative study in which frequently asked questions from choroidal melanoma patients were compiled and posed to three major LLMs: ChatGPT 3.5, Bing AI, and DocsGPT. Responses were reviewed by three ocular oncology experts and scored as accurate, partially accurate, or inaccurate. Statistical analysis compared response quality across models.
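The abstract does not include the study's code; the following Python is a minimal sketch of the workflow described above, assuming hypothetical query_model and expert_grade helpers (these are illustrative placeholders, not the authors' tooling).

```python
# Minimal sketch of the evaluation pipeline: pose each patient FAQ to each
# model and tally the expert-assigned grade. query_model and expert_grade
# are hypothetical callables supplied by the caller.
from collections import Counter

MODELS = ["ChatGPT 3.5", "Bing AI", "DocsGPT"]
GRADES = ["accurate", "partially accurate", "inaccurate"]

def grade_responses(questions, query_model, expert_grade):
    """Collect expert grades for every (model, question) pair."""
    tallies = {model: Counter() for model in MODELS}
    for question in questions:
        for model in MODELS:
            answer = query_model(model, question)          # hypothetical API call
            grade = expert_grade(model, question, answer)  # consensus of three experts
            assert grade in GRADES
            tallies[model][grade] += 1
    return tallies
```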
For medical advice questions, ChatGPT produced accurate responses 92% of the time, versus 58% for both Bing AI and DocsGPT. For pre- and post-operative questions, ChatGPT and Bing AI were 86% accurate, while DocsGPT was 73% accurate. The differences between models were not statistically significant. ChatGPT's responses were the longest and Bing AI's the shortest, but response length did not affect accuracy. All three LLMs appropriately directed patients to seek advice from medical professionals.
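The abstract does not name the statistical test used; as one hedged illustration, a chi-squared test on a model-by-grade contingency table would compare accuracy across the three models. The counts below are invented placeholders for illustration only, not the study's data.

```python
# Illustrative significance check across models, assuming a chi-squared test
# of independence (the abstract does not state which test was used).
from scipy.stats import chi2_contingency

# Rows: models; columns: [accurate, partially accurate, inaccurate].
# Placeholder counts only -- NOT the study's actual data.
counts = [
    [11, 1, 0],  # ChatGPT 3.5 (placeholder)
    [7, 4, 1],   # Bing AI (placeholder)
    [7, 3, 2],   # DocsGPT (placeholder)
]
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")  # p > 0.05 would indicate no significant difference
```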
LLMs show promising capability to address common choroidal melanoma patient questions at generally acceptable accuracy levels. However, inconsistent and inaccurate responses do occur, highlighting the need for improved fine-tuning and oversight before integration into clinical practice.