F. Jacob Seagull, Janet E. Bailey, Andrew Trout, Richard H. Cohan, Monica L. Lypson
Department of Medical Education, University of Michigan Medical School, 221 Victor Vaughan Building, 1111 E. Catherine SPC-2054, Ann Arbor, MI 48109-2054.
Department of Radiology, University of Michigan Medical School, Ann Arbor, MI.
Acad Radiol. 2014 Jul;21(7):909-15. doi: 10.1016/j.acra.2014.03.010.
Despite increasing radiology coverage, nonradiology residents continue to preliminarily interpret basic radiologic studies independently, yet their ability to do so accurately is not routinely assessed.
An online test of basic radiologic image interpretation was developed through an iterative process: educational objectives were established, and questions and images were then gathered to create the assessment. The test was administered online to first-year interns (postgraduate year [PGY] 1) from 14 different specialties, as well as to a sample of third- and fourth-year radiology residents (PGY3/R2 and PGY4/R3).
Over a 2-year period, 368 residents were assessed, including PGY1 (n = 349), PGY3/R2 (n = 14), and PGY4/R3 (n = 5) residents. Overall, the test discriminated effectively between interns (average score = 66%) and advanced residents (R2 = 86%, R3 = 89%; P < .05). Item analysis indicated discrimination indices ranging from -0.72 to 48.3 (mean = 3.12, median = 0.58) for individual questions, including four questions with negative discrimination indices. After removal of the negatively indexed questions, the overall predictive value of the instrument persisted, and discrimination indices increased for all but one of the remaining questions (range = 0.027-70.8, mean = 5.76, median = 0.94).
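The abstract does not state which discrimination statistic was computed, and its reported values exceed the -1 to 1 range of the classical index, so the following is only an illustrative sketch of the upper-lower groups method of classical item analysis, using entirely hypothetical response data:

```python
# Sketch of the classical upper-lower item-discrimination index.
# All data below are hypothetical; the study's actual statistic and
# grouping method are not specified in the abstract.

def discrimination_index(responses, item, frac=0.27):
    """responses: list of (total_score, answers), answers[item] in {0, 1}.
    Returns proportion correct in the top frac of examinees minus the
    proportion correct in the bottom frac (range -1 to 1)."""
    ranked = sorted(responses, key=lambda r: r[0], reverse=True)
    n = max(1, int(len(ranked) * frac))       # size of each extreme group
    upper, lower = ranked[:n], ranked[-n:]
    p_upper = sum(r[1][item] for r in upper) / n
    p_lower = sum(r[1][item] for r in lower) / n
    return p_upper - p_lower

# Hypothetical cohort: (total test score, per-item 0/1 answers)
cohort = [
    (90, [1, 1]), (85, [1, 0]), (80, [1, 1]),
    (60, [0, 1]), (55, [0, 0]), (50, [0, 1]),
]
print(discrimination_index(cohort, item=0))  # 1.0: item separates groups
print(discrimination_index(cohort, item=1))  # 0.0: item does not
```

A negative value, as for the four questions removed from the instrument, would mean lower-scoring examinees answered the item correctly more often than higher-scoring ones.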
Validation of an initial iteration of an assessment of basic image-interpretation skills led to revisions that improved the test. The result is a test of basic radiologic reading skills with validation evidence in a resident population. More generally, the results demonstrate a principled approach to test development.