Division of Plastic and Reconstructive Surgery, Department of Surgery, Rutgers-New Jersey Medical School, Newark, New Jersey.
J Surg Educ. 2021 Sep-Oct;78(5):1461-1468. doi: 10.1016/j.jsurg.2021.02.003. Epub 2021 Mar 17.
As the USMLE Step 1 Board exam moves to a pass/fail system, there will be fewer objective measures available to evaluate students applying to residency programs. Thus, there is a need for a reliable, validated method of screening applicants based on all areas of their applications. To this end, we conducted a literature review to examine previously described residency application screening tools.
A PubMed search was conducted using the keywords "residency," "applicant," "scoring," "algorithm," and "ranking." The search was limited to the past 10 years, and only English-language articles with full text available were included. The initial search yielded 512 results. Titles and abstracts were evaluated for inclusion, and 11 articles met criteria for full-text evaluation. Following full-text evaluation, an additional 6 articles were excluded, with reasons documented.
A total of 5 papers were included in our descriptive analysis. Villwock et al. used the open-source STAR algorithm to create an initial interview list based on program-specific desirable attributes. Bowe et al. attempted to develop a screening tool based on the 6 ACGME competencies that could accurately predict a resident's performance. Similarly, Lyons et al. worked with an outside consulting firm to develop a screening tool that used several situational judgment questions to assess desired competencies for first year residents. Schenker et al. developed an evaluation process that used a combination of a standardized screening tool and semistructured interviews to produce a final rank list. Hu et al. created a screening tool for pharmacology residency applicants based on specific domains.
Several residency application scoring systems have been evaluated for use in the initial screening process, but there is no consensus on which system is superior or whether those systems are successful in selecting the "best" candidates.