

Assessment Study of ChatGPT-3.5's Performance on the Final Polish Medical Examination: Accuracy in Answering 980 Questions.

Author Information

Siebielec Julia, Ordak Michal, Oskroba Agata, Dworakowska Anna, Bujalska-Zadrozny Magdalena

Affiliation

Department of Pharmacotherapy and Pharmaceutical Care, Faculty of Pharmacy, Medical University of Warsaw, 02-091 Warsaw, Poland.

Publication Information

Healthcare (Basel). 2024 Aug 16;12(16):1637. doi: 10.3390/healthcare12161637.

DOI: 10.3390/healthcare12161637
PMID: 39201195
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11353589/
Abstract

BACKGROUND/OBJECTIVES: The use of artificial intelligence (AI) in education is dynamically growing, and models such as ChatGPT show potential in enhancing medical education. In Poland, to obtain a medical diploma, candidates must pass the Medical Final Examination, which consists of 200 questions with one correct answer per question, is administered in Polish, and assesses students' comprehensive medical knowledge and readiness for clinical practice. The aim of this study was to determine how ChatGPT-3.5 handles questions included in this exam.

METHODS

This study considered 980 questions from five examination sessions of the Medical Final Examination conducted by the Medical Examination Center in the years 2022-2024. The analysis included the field of medicine, the difficulty index of the questions, and their type, namely theoretical versus case-study questions.

RESULTS

The average correct answer rate achieved by ChatGPT across the five examination sessions hovered around 60% and was lower (p < 0.001) than the average score achieved by the examinees. The lowest percentage of correct answers was in hematology (42.1%), while the highest was in endocrinology (78.6%). The difficulty index of the questions showed a statistically significant correlation with the correctness of the answers (p = 0.04). Questions for which ChatGPT-3.5 provided incorrect answers also had a lower (p < 0.001) percentage of correct responses among examinees. The type of question analyzed did not significantly affect the correctness of the answers (p = 0.46).

CONCLUSIONS

This study indicates that ChatGPT-3.5 can be an effective tool for assisting in passing the final medical exam, but the results should be interpreted cautiously. It is recommended to further verify the correctness of the answers using various AI tools.
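The per-field accuracy and difficulty-correlation analysis described in the Methods and Results can be sketched as follows. This is a minimal illustration only: the question records and field names below are hypothetical examples, not the study's data, and a plain Pearson correlation stands in for whichever exact statistical test the authors applied.

```python
# Hypothetical question records: (medical field, difficulty index in [0, 1],
# whether the model answered correctly). These are illustrative values only.
questions = [
    ("hematology", 0.55, False),
    ("hematology", 0.70, True),
    ("endocrinology", 0.80, True),
    ("endocrinology", 0.75, True),
    ("cardiology", 0.40, False),
    ("cardiology", 0.90, True),
]

def accuracy_by_field(records):
    """Fraction of correctly answered questions per medical field."""
    totals, correct = {}, {}
    for field, _, ok in records:
        totals[field] = totals.get(field, 0) + 1
        correct[field] = correct.get(field, 0) + (1 if ok else 0)
    return {f: correct[f] / totals[f] for f in totals}

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-field accuracy, as in the hematology vs. endocrinology comparison.
acc = accuracy_by_field(questions)
# Correlate each question's difficulty index with model correctness (0/1).
r = pearson_r([q[1] for q in questions],
              [1.0 if q[2] else 0.0 for q in questions])
print(acc)
print(round(r, 3))
```

On real data, the last step would be followed by a significance test to obtain the p-values reported above.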


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a83f/11353589/6c4786630029/healthcare-12-01637-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a83f/11353589/357e3c71bf37/healthcare-12-01637-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a83f/11353589/272d1a3e8f2f/healthcare-12-01637-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a83f/11353589/2de84a899f41/healthcare-12-01637-g004.jpg

Similar Articles

1. Assessment Study of ChatGPT-3.5's Performance on the Final Polish Medical Examination: Accuracy in Answering 980 Questions. Healthcare (Basel). 2024 Aug 16;12(16):1637. doi: 10.3390/healthcare12161637.
2. ChatGPT-3.5 passes Poland's medical final examination-Is it possible for ChatGPT to become a doctor in Poland? SAGE Open Med. 2024 Jun 17;12:20503121241257777. doi: 10.1177/20503121241257777. eCollection 2024.
3. Performance of ChatGPT on the Chinese Postgraduate Examination for Clinical Medicine: Survey Study. JMIR Med Educ. 2024 Feb 9;10:e48514. doi: 10.2196/48514.
4. Assessment of ChatGPT-3.5's Knowledge in Oncology: Comparative Study with ASCO-SEP Benchmarks. JMIR AI. 2024 Jan 12;3:e50442. doi: 10.2196/50442.
5. Performance of ChatGPT on the Peruvian National Licensing Medical Examination: Cross-Sectional Study. JMIR Med Educ. 2023 Sep 28;9:e48039. doi: 10.2196/48039.
6. Assessing question characteristic influences on ChatGPT's performance and response-explanation consistency: Insights from Taiwan's Nursing Licensing Exam. Int J Nurs Stud. 2024 May;153:104717. doi: 10.1016/j.ijnurstu.2024.104717. Epub 2024 Feb 8.
7. ChatGPT-3.5 and ChatGPT-4 dermatological knowledge level based on the Specialty Certificate Examination in Dermatology. Clin Exp Dermatol. 2024 Jun 25;49(7):686-691. doi: 10.1093/ced/llad255.
8. Performance of the Large Language Model ChatGPT on the National Nurse Examinations in Japan: Evaluation Study. JMIR Nurs. 2023 Jun 27;6:e47305. doi: 10.2196/47305.
9. GPT-4o vs. Human Candidates: Performance Analysis in the Polish Final Dentistry Examination. Cureus. 2024 Sep 6;16(9):e68813. doi: 10.7759/cureus.68813. eCollection 2024 Sep.
10. Assessing the Capability of ChatGPT in Answering First- and Second-Order Knowledge Questions on Microbiology as per Competency-Based Medical Education Curriculum. Cureus. 2023 Mar 12;15(3):e36034. doi: 10.7759/cureus.36034. eCollection 2023 Mar.

Cited By

1. Accuracy and Safety of ChatGPT-3.5 in Assessing Over-the-Counter Medication Use During Pregnancy: A Descriptive Comparative Study. Pharmacy (Basel). 2025 Jul 30;13(4):104. doi: 10.3390/pharmacy13040104.
2. Comparative analysis of ChatGPT 3.5 and ChatGPT 4 obstetric and gynecological knowledge. Sci Rep. 2025 Jul 1;15(1):21133. doi: 10.1038/s41598-025-08424-1.
3. ChatGPT Answers the 110-Question Laboratory Enzymology Student Exam: Pass or Fail? Cureus. 2025 Apr 13;17(4):e82168. doi: 10.7759/cureus.82168. eCollection 2025 Apr.
4. Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5 and Humans in Clinical Chemistry Multiple-Choice Questions. Adv Med Educ Pract. 2024 Sep 20;15:857-871. doi: 10.2147/AMEP.S479801. eCollection 2024.
5. The performance of OpenAI ChatGPT-4 and Google Gemini in virology multiple-choice questions: a comparative analysis of English and Arabic responses. BMC Res Notes. 2024 Sep 3;17(1):247. doi: 10.1186/s13104-024-06920-7.

References

1. The potential of ChatGPT in medicine: an example analysis of nephrology specialty exams in Poland. Clin Kidney J. 2024 Jun 22;17(8):sfae193. doi: 10.1093/ckj/sfae193. eCollection 2024 Aug.
2. In-depth analysis of ChatGPT's performance based on specific signaling words and phrases in the question stem of 2377 USMLE step 1 style questions. Sci Rep. 2024 Jun 12;14(1):13553. doi: 10.1038/s41598-024-63997-7.
3. Evaluation of ChatGPT as a Tool for Answering Clinical Questions in Pharmacy Practice. J Pharm Pract. 2024 Dec;37(6):1303-1310. doi: 10.1177/08971900241256731. Epub 2024 May 22.
4. Can AI pass the written European Board Examination in Neurological Surgery? - Ethical and practical issues. Brain Spine. 2024 Feb 13;4:102765. doi: 10.1016/j.bas.2024.102765. eCollection 2024.
5. Can ChatGPT-3.5 Pass a Medical Exam? A Systematic Review of ChatGPT's Performance in Academic Testing. J Med Educ Curric Dev. 2024 Mar 13;11:23821205241238641. doi: 10.1177/23821205241238641. eCollection 2024 Jan-Dec.
6. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Med Teach. 2024 Apr;46(4):446-470. doi: 10.1080/0142159X.2024.2314198. Epub 2024 Feb 29.
7. Performance of ChatGPT on Stage 1 of the Taiwanese medical licensing exam. Digit Health. 2024 Feb 16;10:20552076241233144. doi: 10.1177/20552076241233144. eCollection 2024 Jan-Dec.
8. Beyond human in neurosurgical exams: ChatGPT's success in the Turkish neurosurgical society proficiency board exams. Comput Biol Med. 2024 Feb;169:107807. doi: 10.1016/j.compbiomed.2023.107807. Epub 2023 Dec 10.
9. Reshaping medical education: Performance of ChatGPT on a PES medical examination. Cardiol J. 2024;31(3):442-450. doi: 10.5603/cj.97517. Epub 2023 Oct 13.
10. Appraising the performance of ChatGPT in psychiatry using 100 clinical case vignettes. Asian J Psychiatr. 2023 Nov;89:103770. doi: 10.1016/j.ajp.2023.103770. Epub 2023 Sep 20.