Hemmer Paul A, Dong Ting, Durning Steven J, Pangaro Louis N
Department of Medicine, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814.
Mil Med. 2015 Apr;180(4 Suppl):79-87. doi: 10.7205/MILMED-D-14-00576.
Medical students learn clinical reasoning, in part, through patient care. Although the number of patients seen is associated with knowledge examination scores, studies have not demonstrated an association between patient problems and an assessment of clinical reasoning.
To examine the reliability of a clinical reasoning examination and to investigate whether there was an association between internal medicine core clerkship students' performance on this examination and the number of patients they saw with matching problems during their internal medicine clerkship.
Students on the core internal medicine clerkship at the Uniformed Services University log 11 core patient problems based on the Clerkship Directors in Internal Medicine curriculum. On a final clerkship examination (the Multistep), students watch a scripted video encounter between physician and patient actors that assesses three sequential steps in clinical reasoning: Step One focuses on the history and physical examination; in Step Two, students write a problem list after viewing additional clinical findings; in Step Three, students complete a prioritized differential diagnosis and treatment plan. Each Multistep examination comprises three different cases. For graduating classes 2010-2012 (n = 497), we matched the number of patients seen with the problem most represented by the Multistep cases (epigastric pain, generalized edema, monoarticular arthritis, angina, syncope, pleuritic chest pain). We report two-way Pearson correlations between the number of patients students reported with similar problems and the students' percent scores on Step One, Step Two, Step Three, and the overall test.
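The two-way Pearson correlation used in this analysis can be sketched in a few lines. The data below are entirely hypothetical (invented for illustration, not the study's data): counts of logged patients for one problem paired with each student's overall Multistep percent score.

```python
from math import sqrt


def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Hypothetical illustration only: number of patients each student logged
# with a given problem, and that student's overall Multistep percent score.
patients_seen = [0, 1, 1, 2, 2, 3, 3, 4]
multistep_pct = [62, 70, 65, 71, 68, 74, 72, 78]

r = pearson_r(patients_seen, multistep_pct)
print(f"r = {r:.2f}")
```

In practice a statistics package (e.g., `scipy.stats.pearsonr`) would also return the two-tailed p-value used to judge significance.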
Multistep reliability: Step 1, 0.60 to 0.80; Step 2, 0.41 to 0.65; Step 3, 0.53 to 0.78; overall examination (3 cases), 0.74 to 0.83. For three problems, the number of patients seen had small to modest correlations with the Multistep Examination of Analytic Ability total score (pleuritic chest pain: r = 0.27, p < 0.05, n = 81 patients; epigastric pain: r = 0.14, p < 0.05, n = 324 patients; generalized edema: r = 0.19, p < 0.05, n = 118 patients).
Although the Multistep is a reliable assessment, student performance on this clinical reasoning examination was only weakly associated with the number of patients seen with similar problems. This may reflect limited transfer of knowledge between clinical and examination settings, the complexity of clinical reasoning, or the reliability limits of patient logs and the Multistep.