Meier Andreas H, Gruessner Angelika, Cooney Robert N
Department of Surgery, Education Office, Upstate Medical University, Syracuse, New York.
J Surg Educ. 2016 Nov-Dec;73(6):e150-e157. doi: 10.1016/j.jsurg.2016.09.001.
Since July 2014, General Surgery residency programs have been required to use the Accreditation Council for Graduate Medical Education milestones twice annually to assess the progress of their trainees. We saw this change as an opportunity to use the new evaluation tool for resident self-assessment and to further engage the faculty in the educational efforts of the program.
We piloted the milestones with postgraduate year (PGY) II and IV residents during the 2013/2014 academic year to acquaint faculty and residents with the instrument. In July 2014, we implemented the same protocol for all residents. Residents meet with their advisers quarterly, and two of these meetings are used for milestone assessment. The resident performs an independent self-evaluation, and the adviser grades the resident independently. They then discuss the evaluations, focusing on the areas of greatest disagreement. The faculty member presents the resident to the clinical competency committee (CCC), which decides on the final scores and submits them to the Accreditation Council for Graduate Medical Education website. All records were stored anonymously in a MySQL database. We used ANOVA with Tukey post hoc analysis to evaluate differences between groups, and we used intraclass correlation coefficients (ICCs) and Krippendorff's α to assess interrater reliability.
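As a minimal illustration of the interrater-reliability step, the sketch below computes a one-way ICC, ICC(1), for per-resident scores from three raters. The scores are invented for illustration, and the paper does not state which ICC form was used, so this is an assumption, not a reproduction of the study's analysis.

```python
# Hypothetical sketch: one-way ICC(1) for subjects rated by k raters.
# The rating values below are made up; they are not the study's data.

def icc1(scores):
    """ICC(1) = (MSB - MSW) / (MSB + (k-1)*MSW) for rows of rater scores."""
    n = len(scores)        # number of subjects (residents)
    k = len(scores[0])     # raters per subject (e.g., self, adviser, CCC)
    grand = sum(sum(row) for row in scores) / (n * k)
    # between-subject mean square
    msb = k * sum((sum(row) / k - grand) ** 2 for row in scores) / (n - 1)
    # within-subject mean square
    msw = sum((x - sum(row) / k) ** 2
              for row in scores for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

ratings = [  # rows: residents; columns: self, adviser, CCC (invented)
    [2.1, 2.3, 2.2],
    [2.8, 2.9, 2.8],
    [3.4, 3.3, 3.5],
    [3.9, 4.0, 3.9],
]
print(round(icc1(ratings), 2))  # close agreement yields an ICC near 1
```

In practice, a statistics package (e.g., an ICC routine that reports all ICC forms) would be used instead of a hand-rolled formula, since the appropriate ICC form depends on whether raters are treated as fixed or random.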
We analyzed evaluations for 44 residents. For each evaluation, we created a scale score across all Likert items and compared score differences by PGY level and by rater (self, adviser, and CCC). Scores increased significantly between most PGY levels (p < 0.05), and there were no significant score differences between raters at any PGY level. Interrater reliability for the total score and the 6 competency domains was very high (ICC: 0.87-0.98; α: 0.84-0.97). Even though the milestone evaluation process added work for residents and faculty, participation was very good (93.9% of residents and 92.9% of faculty) and feedback was generally positive.
Although implementation of the milestones has added work for general surgery residency programs, it has also created opportunities to further engage residents in reflection and self-evaluation, and it has opened additional venues for faculty involvement in the residency program's educational process. Using the adviser as the initial rater appears to correlate closely with the final CCC assessment. Resident self-evaluation is required by the Residency Review Committee (RRC), and the milestones seem to be a good instrument for this purpose. Our early assessment suggests the milestones provide a useful instrument for tracking trainee progression through residency.