Kumar Bharat, Ferguson Kristi, Swee Melissa, Suneja Manish
Rheumatology, University of Iowa Hospitals & Clinics, Iowa City, USA.
Medical Education, University of Iowa Carver College of Medicine, Iowa City, USA.
Cureus. 2021 Nov 18;13(11):e19722. doi: 10.7759/cureus.19722. eCollection 2021 Nov.
Objectives: Expert clinicians (ECs) are defined in large part as physicians recognized by their peers for their diagnostic reasoning abilities. However, their reasoning skills have not been quantitatively compared with those of other clinicians using a validated instrument.

Methods: We surveyed Internal Medicine physicians at the University of Iowa to identify ECs. These clinicians, along with an equivalent number of their peers from the general population of internists, were administered the Diagnostic Thinking Inventory. Scores were tabulated for structure and thinking, as well as for four previously identified elements of diagnostic reasoning (data acquisition, problem representation, hypothesis generation, and illness script search and selection). We compared scores between the two groups using the two-sample t-test.

Results: Seventeen ECs completed the inventory (100%). Of 25 randomly selected non-EC internists (IMs), 19 completed the inventory (76%). Mean total scores were 187.2 for the EC group and 175.8 for the IM group. Thinking and structure subscores were 91.5 and 95.71 for ECs, compared with 85.5 and 90.3 for IMs (p-values: 0.0783 and 0.1199, respectively). Mean data acquisition, problem representation, hypothesis generation, and illness script selection subscores for ECs were 4.46, 4.57, 4.71, and 4.46, compared with 4.13, 4.38, 4.45, and 4.13 in the IM group (p-values: 0.2077, 0.4528, 0.095, and 0.029, respectively).

Conclusions: ECs have greater proficiency in searching for and selecting illness scripts than their peers. No other score or subscore differences reached statistical significance. These results will help inform continuing medical education efforts to improve diagnostic reasoning.
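The group comparison described in the Methods relies on the two-sample (pooled-variance) t-test. As a minimal sketch of how that statistic is computed, the following Python snippet applies it to hypothetical subscore values; the numbers are invented for illustration only, since the study's raw per-respondent data are not reported here:

```python
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (equal-variance form).

    t = (mean(a) - mean(b)) / sqrt(sp^2 * (1/na + 1/nb)),
    where sp^2 is the pooled sample variance.
    """
    na, nb = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical illness-script subscores (NOT the study's data).
ec = [4.6, 4.3, 4.5, 4.4, 4.5]  # expert clinicians
im = [4.1, 4.2, 4.0, 4.2, 4.1]  # general internists

t = two_sample_t(ec, im)
```

The resulting t statistic would then be referred to a t distribution with na + nb − 2 degrees of freedom to obtain the p-value; in practice a library routine such as SciPy's `ttest_ind` handles both steps.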