Vancouver, British Columbia, Canada; and Detroit, Mich. From the Division of Plastic Surgery, British Columbia Children's Hospital and University of British Columbia, and the Departments of Otolaryngology and Surgery and the School of Medicine, Wayne State University.
Plast Reconstr Surg. 2009 Dec;124(6):2179-2184. doi: 10.1097/PRS.0b013e3181bcf11f.
In-training evaluations in graduate medical education have typically been challenging. Although the majority of standardized examination delivery methods have become computer-based, in-training examinations generally remain pencil-paper-based, if they are performed at all. Audience response systems present a novel way to stimulate and evaluate the resident-learner. The purpose of this study was to assess the outcomes of audience response systems testing as compared with traditional testing in a plastic surgery residency program.
A prospective 1-year pilot study of 10 plastic surgery residents was performed using audience response systems-delivered testing for the first half of the academic year and traditional pencil-paper testing for the second half. Examination content was based on monthly "Core Quest" curriculum conferences. Quantitative outcome measures included comparison of pretest and posttest and cumulative test scores of both formats. Qualitative outcomes from the individual participants were obtained by questionnaire.
When using the audience response systems format, pretest and posttest mean scores were 67.5 and 82.5 percent, respectively; with the traditional pencil-paper format, they were 56.5 and 79.5 percent. A comparison of the cumulative mean audience response systems score (85.0 percent) and traditional pencil-paper score (75.0 percent) revealed statistically significantly higher scores with audience response systems (p = 0.01). Qualitative outcomes revealed increased conference enthusiasm, greater enjoyment of testing, and no user difficulties with the audience response systems technology.
The audience response systems modality of in-training evaluation captures participant interest and reinforces material more effectively than traditional pencil-paper testing does. The advantages include a more interactive learning environment, stimulation of class participation, immediate feedback to residents, and immediate tabulation of results for the educator. Disadvantages include start-up costs and preparation lead time.