Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, Los Angeles, California.
Department of Urology, Cedars-Sinai Medical Center, Los Angeles, California.
J Surg Educ. 2024 Mar;81(3):422-430. doi: 10.1016/j.jsurg.2023.12.002. Epub 2024 Jan 29.
OBJECTIVE: Surgical skill assessment tools such as the End-to-End Assessment of Suturing Expertise (EASE) can differentiate surgeons by experience level. In this simulation-based study, we define a competency benchmark for intraoperative robotic suturing, using EASE as a validated measure of performance.
DESIGN: Participants performed a dry-lab vesicourethral anastomosis (VUA) exercise. Each video was independently scored by 2 trained, blinded reviewers using EASE. Inter-rater reliability was measured with the prevalence-adjusted, bias-adjusted kappa (PABAK) using 2 example videos. All videos were also reviewed by an expert surgeon, who determined whether the suturing skills exhibited were at the competency level expected at residency graduation (pass or fail). The Contrasting Groups (CG) method was then used to set a pass/fail score at the intersection of the pass and fail cohorts' EASE score distributions.
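As a rough illustration of the agreement statistic, the sketch below computes PABAK for two reviewers rating a single subskill. The reviewer names, rating values, and the pabak helper are hypothetical and are not drawn from the study's data or analysis code; the k-category form PABAK = (k * Po - 1) / (k - 1), where Po is the observed proportion of exact agreement, reduces to the familiar 2 * Po - 1 when k = 2.

```python
# Minimal sketch (hypothetical data, not the study's code): PABAK for two
# reviewers rating the same items on a k-category scale.

def pabak(ratings_a, ratings_b, k):
    """Prevalence-adjusted, bias-adjusted kappa for two raters."""
    if len(ratings_a) != len(ratings_b) or not ratings_a:
        raise ValueError("Ratings must be paired and non-empty.")
    # Observed proportion of exact agreement between the two raters.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)
    return (k * p_o - 1) / (k - 1)

# Two hypothetical reviewers scoring one EASE subskill (3-point scale)
# across 10 video segments; they disagree on a single segment.
reviewer_1 = [3, 2, 3, 1, 2, 3, 3, 2, 1, 3]
reviewer_2 = [3, 2, 3, 1, 3, 3, 3, 2, 1, 3]
print(round(pabak(reviewer_1, reviewer_2, k=3), 2))  # 0.85
```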
SETTING: Keck School of Medicine, University of Southern California.
PARTICIPANTS: Twenty-six participants were included: 8 medical students, 8 junior residents (PGY 1-2), 7 senior residents (PGY 3-5), and 3 attending urologists.
RESULTS: After 1 round of consensus-building, the average PABAK across EASE subskills was 0.90 (range 0.67-1.0). The CG method produced a competency benchmark EASE score of >35/39, with a pass rate of 10/26 (38%); 27% of participants were deemed competent by expert evaluation. False positives and false negatives were defined as medical students who passed and attendings who failed the assessment, respectively. This pass/fail score produced no false positives or negatives, and fewer junior residents than senior residents were considered competent by both the expert evaluation and the CG benchmark.
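To make the standard-setting step concrete, the sketch below locates a Contrasting Groups cut score as the intersection of curves fitted to the pass and fail groups' EASE scores. The score arrays are hypothetical, and fitting normal densities is an assumed modeling choice rather than the authors' reported procedure; the study's actual benchmark was >35/39.

```python
# Minimal sketch of a Contrasting Groups cut score under assumed data:
# fit normal densities to the EASE scores of expert-judged "fail" and
# "pass" videos and place the cut where the two densities intersect.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

fail_scores = np.array([24, 27, 29, 30, 31, 32, 33, 34])  # hypothetical
pass_scores = np.array([35, 36, 36, 37, 37, 38, 38, 39])  # hypothetical

f_mu, f_sd = fail_scores.mean(), fail_scores.std(ddof=1)
p_mu, p_sd = pass_scores.mean(), pass_scores.std(ddof=1)

def density_gap(x):
    """Fail-group density minus pass-group density at score x."""
    return norm.pdf(x, f_mu, f_sd) - norm.pdf(x, p_mu, p_sd)

# The gap is positive near the fail mean and negative near the pass mean,
# so the root between the two means marks the intersection (the cut score).
cut_score = brentq(density_gap, f_mu, p_mu)
print(f"Contrasting Groups cut score: {cut_score:.1f}")
```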
CONCLUSIONS: Using an absolute standard-setting method, a competency benchmark was set to identify trainees who can competently execute a standardized dry-lab robotic suturing exercise. This standard can inform high-stakes decisions regarding a trainee's technical readiness for independent practice. Future work includes validation of this standard in the clinical environment through correlation with clinical outcomes.