
Performance of ChatGPT on Factual Knowledge Questions Regarding Clinical Pharmacy.

Affiliations

Department of Clinical Pharmacy, Tergooi Medical Center, Hilversum, The Netherlands.

Department of Clinical Pharmacy, Amphia Hospital, Breda, The Netherlands.

Publication Information

J Clin Pharmacol. 2024 Sep;64(9):1095-1100. doi: 10.1002/jcph.2443. Epub 2024 Apr 16.

Abstract

ChatGPT is a language model that was trained on a large dataset including medical literature. Several studies have described the performance of ChatGPT on medical exams. In this study, we examine its performance in answering factual knowledge questions regarding clinical pharmacy. Questions were obtained from a Dutch application that features multiple-choice questions to maintain a basic knowledge level for clinical pharmacists. In total, 264 clinical pharmacy-related questions were presented to ChatGPT, and the responses were evaluated for accuracy, concordance, quality of the substantiation, and reproducibility. Accuracy was defined as the correctness of the answer, and results were compared with the overall scores achieved by pharmacists over 2022. Responses were marked concordant if they contained no contradictions. The quality of the substantiation was graded by two independent pharmacists on a 4-point scale. Reproducibility was established by presenting the questions multiple times and on various days. ChatGPT yielded accurate responses for 79% of the questions, surpassing the pharmacists' accuracy of 66%. Concordance was 95%, and the quality of the substantiation was deemed good or excellent for 73% of the questions. Reproducibility was consistently high (>92%), both within and between days, as well as across different users. ChatGPT demonstrated higher accuracy and reproducibility on factual knowledge questions related to clinical pharmacy practice than pharmacists. Consequently, we posit that ChatGPT could serve as a valuable resource for pharmacists. We hope the technology will further improve, which may lead to enhanced future performance.

