
ChatGPT versus clinician: challenging the diagnostic capabilities of artificial intelligence in dermatology.

Affiliations

Department of Dermatology, Royal South Hants Hospital, University Hospitals Southampton, Southampton, UK.

Department of Dermatology, St Mary's Hospital, Portsmouth Hospitals University NHS Trust, Portsmouth, UK.

Publication information

Clin Exp Dermatol. 2024 Jun 25;49(7):707-710. doi: 10.1093/ced/llad402.

Abstract

BACKGROUND

ChatGPT is an online language-based platform designed to answer questions in a human-like way, using deep-learning technology.

OBJECTIVES

To examine the diagnostic capabilities of ChatGPT using real-world anonymized medical dermatology cases.

METHODS

Clinical information from 90 consecutive patients referred to a single dermatology emergency clinic between June and December 2022 was examined. Thirty-six patients were included. Anonymized clinical information was transcribed and input into ChatGPT 4.0, followed by the question 'What is the most likely diagnosis?' The diagnosis suggested by ChatGPT was then compared with the diagnosis made by dermatologists.

RESULTS

After inputting clinical history and examination data obtained by a dermatologist, ChatGPT made a correct primary diagnosis 56% of the time (n = 20). Using the clinical history and cutaneous signs recorded by nonspecialists, it made a correct diagnosis 39% of the time (n = 14). This was similar to the diagnostic rate of nonspecialists (36%; n = 13), but much lower than that of dermatologists (83%; n = 30). Referring sources offered no differential diagnosis 28% of the time (n = 10), whereas ChatGPT provided a differential diagnosis 100% of the time. Qualitative analysis showed that ChatGPT offered its responses with caution, often justifying its reasoning.
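For reference, each percentage in the Results corresponds to a count out of the 36 included patients; a quick arithmetic check (counts and labels taken directly from the abstract) confirms the reported figures:

```python
# Sanity check: the Results percentages against the reported counts,
# with 36 included patients as the denominator.
included = 36
counts = {
    "ChatGPT, dermatologist-recorded data": 20,    # reported 56%
    "ChatGPT, nonspecialist-recorded data": 14,    # reported 39%
    "Nonspecialists": 13,                          # reported 36%
    "Dermatologists": 30,                          # reported 83%
    "Referrals with no differential offered": 10,  # reported 28%
}
for label, n in counts.items():
    pct = round(100 * n / included)
    print(f"{label}: {n}/{included} = {pct}%")
```

Each rounded proportion matches the percentage stated in the Results section.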

CONCLUSIONS

This study illustrates that while ChatGPT has a diagnostic capability, in its current form it does not significantly improve the diagnostic yield in primary or secondary care.

