
Development and validation of a simulation-based assessment tool in colonoscopy.

Author information

Jaensch Claudia, Jensen Rune D, Paltved Charlotte, Madsen Anders H

Affiliations

Surgical Research Department, Regional Hospital Gødstrup, Herning, Denmark.

Department of Clinical Medicine, Aarhus University, Aarhus, Denmark.

Publication information

Adv Simul (Lond). 2023 Aug 10;8(1):19. doi: 10.1186/s41077-023-00260-5.

Abstract

BACKGROUND

Colonoscopy is difficult to learn. Virtual reality simulation training is helpful, but how and when novices should progress to patient-based training has yet to be established. To date, there is no assessment tool for credentialing novice endoscopists prior to clinical practice. The aim of this study was to develop such an assessment tool based on metrics provided by the simulator. The metrics used for the assessment tool should be able to discriminate between novices, intermediates, and experts and include essential checklist items for patient safety.

METHODS

The validation process was conducted based on the Standards for Educational and Psychological Testing. An expert panel decided upon three essential checklist items for patient safety based on Lawshe's method: perforation, hazardous tension to the bowel wall, and cecal intubation. A power calculation was performed. In this study, the Simbionix GI Mentor II simulator was used. Metrics with discriminatory ability were identified with variance analysis and combined to form an aggregate score. Based on this score and the essential items, pass/fail standards were set and reliability was tested.
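
As a rough illustration of the statistical steps named here (Lawshe's content validity ratio for selecting the checklist items, one-way ANOVA to screen simulator metrics for discriminatory ability, an aggregate score, and Cronbach's alpha for reliability), the sketch below uses hypothetical metric names and values. The paper does not report how the aggregate score is computed, so the z-score sum is an assumption, not the authors' formula.

```python
# Minimal sketch of the statistical steps named in METHODS.
# Metric names, group values, and the z-score aggregation are illustrative
# assumptions, not the authors' data or scoring formula.
import numpy as np
from scipy.stats import f_oneway

# Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2), where n_e is the
# number of panelists rating an item "essential" and N is the panel size.
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

# One-way ANOVA per simulator metric to screen for discriminatory ability
# across novices, intermediates, and experts (hypothetical values).
metrics = {
    "pct_time_with_clear_view": {
        "novice": [55, 60, 58], "intermediate": [70, 72, 75], "expert": [80, 83, 85],
    },
    "pct_mucosa_examined": {
        "novice": [62, 65, 60], "intermediate": [78, 80, 76], "expert": [88, 90, 92],
    },
}
discriminatory = []
for name, groups in metrics.items():
    f_stat, p_value = f_oneway(groups["novice"], groups["intermediate"], groups["expert"])
    if p_value < 0.05:
        discriminatory.append(name)

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows = participants, columns = metrics on a comparable scale."""
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Aggregate score as a sum of z-scored metrics (an assumed combination scheme).
scores = np.array([[55, 62], [60, 65], [70, 78], [75, 76], [80, 88], [85, 92]], dtype=float)
z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
aggregate = z.sum(axis=1)
print(discriminatory, round(cronbach_alpha(z), 2), aggregate.round(2))
```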

RESULTS

Twenty-four participants (eight novices, eight intermediates, and eight expert endoscopists) performed two simulated colonoscopies. Four metrics with discriminatory ability were identified. The aggregate score ranged from 4.2 to 51.2 points. Novices had a mean score of 10.00 (SD 5.13), intermediates 24.63 (SD 7.91), and experts 30.72 (SD 11.98). The difference in score between novices and the other two groups was statistically significant (p<0.01). Although expert endoscopists had a higher score, the difference was not statistically significant (p=0.40). Reliability was good (Cronbach's alpha=0.86). A pass/fail score was defined at 17.1 points with correct completion of three essential checklist items, resulting in three experts and three intermediates failing and one novice passing the assessment.
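
The pass/fail rule reported above combines the 17.1-point cutoff on the aggregate score with correct completion of all three essential checklist items (no perforation, no hazardous tension to the bowel wall, cecal intubation). A minimal sketch of that decision logic follows; the function and parameter names are illustrative, and treating the 17.1-point boundary as inclusive is an assumption.

```python
# Pass/fail decision as reported in RESULTS: aggregate score of at least 17.1
# points AND correct completion of the three essential checklist items.
# Function and parameter names are illustrative, not taken from the paper;
# the inclusive ">=" boundary is an assumption.
PASS_SCORE = 17.1

def passes_assessment(aggregate_score: float,
                      no_perforation: bool,
                      no_hazardous_tension: bool,
                      cecal_intubation: bool) -> bool:
    checklist_ok = no_perforation and no_hazardous_tension and cecal_intubation
    return aggregate_score >= PASS_SCORE and checklist_ok

# Example: a candidate scoring 24.6 points with all checklist items correct
# passes; a 30-point candidate who causes a perforation fails despite the score.
print(passes_assessment(24.6, True, True, True))   # True
print(passes_assessment(30.0, False, True, True))  # False
```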

CONCLUSION

We established a valid and reliable assessment tool with a pass/fail standard on the simulator. We suggest using the assessment after simulation-based training before commencing work-based learning.


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/396f/10413715/04b7e36392c0/41077_2023_260_Fig1_HTML.jpg
