Christopher C. Thompson, Pichamol Jirapinyo, Nitin Kumar, Amy Ou, Andrew Camacho, Balazs Lengyel, Michele B. Ryan
Division of Gastroenterology, Brigham and Women's Hospital, Boston, Massachusetts, USA.
Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts, USA.
Endoscopy. 2014 Sep;46(9):735-44. doi: 10.1055/s-0034-1365463. Epub 2014 Apr 25.
There is currently no objective, validated method to assess the progress of endoscopy trainees or to determine when they have achieved technical competence. The aims of the current study were to develop an endoscopic part-task simulator and to assess the validity of its scoring system.
Fundamental endoscopic skills were determined via kinematic analysis, literature review, and expert interviews. Simulator prototypes and scoring systems were developed to reflect these skills. Validity evidence for content, internal structure, and response process was evaluated.
The final training box consisted of five modules (knob control, torque, retroflexion, polypectomy, and navigation and loop reduction). A total of 5 minutes were permitted per module, with extra points for early completion. Content validity index (CVI)-realism was 0.88, CVI-relevance was 1.00, and CVI-representativeness was 0.88, giving a composite CVI of 0.92. Overall, 82% of participants considered the simulator capable of differentiating between ability levels, and 93% thought the simulator should be used to assess ability before procedures are performed in patients. Inter-item assessment revealed correlations from 0.67 to 0.93, suggesting that the tasks were sufficiently correlated to assess the same underlying construct while each task remained independent. Each module represented 16.0%-26.1% of the total score, suggesting that no module contributed disproportionately to the composite score. Average box scores were 272.6 and 284.4 (P = 0.94) when the test was performed sequentially, and the average score across all participants was 297.6 with proctor 1 and 308.1 with proctor 2 (P = 0.94), suggesting reproducibility and minimal error associated with test administration.
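The reported composite CVI of 0.92 is consistent with taking the arithmetic mean of the three subscale indices. As a minimal sketch (the abstract does not state the averaging convention, so the mean is an assumption here):

```python
# Sketch reproducing the composite content validity index (CVI) from the
# three subscale values reported in the abstract. Assumption: the composite
# is the arithmetic mean of the subscale indices; the paper's exact formula
# is not given in the abstract.

def composite_cvi(subscale_indices):
    """Return the mean of the subscale content validity indices."""
    values = list(subscale_indices)
    return sum(values) / len(values)

cvi = {"realism": 0.88, "relevance": 1.00, "representativeness": 0.88}
print(round(composite_cvi(cvi.values()), 2))  # 0.92
```

With these subscale values the mean works out to exactly 0.92, matching the composite figure reported above.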
A part-task training box and scoring system were developed to assess fundamental endoscopic skills, and validity evidence regarding content, internal structure, and response process was demonstrated.