Alonso-Silverio Gustavo A, Pérez-Escamirosa Fernando, Bruno-Sanchez Raúl, Ortiz-Simon José L, Muñoz-Guerrero Roberto, Minor-Martinez Arturo, Alarcón-Paredes Antonio
1 Universidad Autónoma de Guerrero, Chilpancingo, Guerrero, México.
2 Universidad Nacional Autónoma de México UNAM, Ciudad de México, México.
Surg Innov. 2018 Aug;25(4):380-388. doi: 10.1177/1553350618777045. Epub 2018 May 29.
A trainer for online laparoscopic surgical skills assessment, based on the performance of experts and nonexperts, is presented. The system uses computer vision, augmented reality, and artificial intelligence algorithms, implemented on a Raspberry Pi board in the Python programming language.
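The abstract does not detail the implementation. As an illustration only, the following is a minimal, hypothetical sketch of how the computer-vision and overlay components could be realized in Python with OpenCV on a Raspberry Pi; the color-marker approach, HSV thresholds, and camera index are assumptions, not taken from the paper.

```python
# Hypothetical sketch: track a colored marker on the instrument tip and overlay it
# on the live video (assumed approach; the paper does not specify its method).
import cv2
import numpy as np

LOWER_HSV = np.array([40, 80, 80])     # assumed HSV range for a green marker
UPPER_HSV = np.array([80, 255, 255])

def track_tip(frame):
    """Return the (x, y) centroid of the largest marker blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

cap = cv2.VideoCapture(0)              # Raspberry Pi camera or USB webcam (assumed)
trajectory = []                        # instrument-tip path used later as motion data
while True:
    ok, frame = cap.read()
    if not ok:
        break
    tip = track_tip(frame)
    if tip is not None:
        trajectory.append(tip)
        cv2.circle(frame, tip, 5, (0, 0, 255), -1)   # simple augmented overlay of the tip
    cv2.imshow("trainer", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```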
Two training tasks were evaluated by the laparoscopic system: transferring and pattern cutting. Computer vision libraries were used to obtain the number of transferred points and the simulated pattern cutting trace by tracking the laparoscopic instrument. An artificial neural network (ANN) was trained to learn the behavior of experts and nonexperts in the pattern cutting task, whereas the transferring task was assessed against a preestablished threshold. Four expert laparoscopic surgeons from the hospital "Raymundo Abarca Alarcón" constituted the experienced class for the ANN. Sixteen trainees (10 medical students and 6 residents) with no laparoscopic surgical skills and limited experience in minimally invasive techniques, from the School of Medicine at Universidad Autónoma de Guerrero, constituted the nonexperienced class. Data from participants performing 5 daily repetitions of each task over 5 days were used to build the ANN.
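As an illustration of the classification step, the sketch below trains a small feed-forward ANN to separate experienced from nonexperienced trials. The feature set (path length, task time, smoothness), the network size, and the synthetic data are assumptions for illustration only; the abstract does not describe the actual inputs or architecture.

```python
# Hypothetical sketch: train a small feed-forward ANN to separate expert and
# nonexpert trials from motion-derived features (assumed features, placeholder data).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: one row per trial with assumed features [path_length, task_time, smoothness];
# y: 1 = experienced, 0 = nonexperienced (synthetic placeholder data, not study data).
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([0.8, 40.0, 0.2], 0.1, size=(20, 3)),   # expert-like trials
    rng.normal([1.5, 90.0, 0.6], 0.2, size=(80, 3)),   # nonexpert-like trials
])
y = np.array([1] * 20 + [0] * 80)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict([[0.9, 45.0, 0.25]]))   # should print [1] for this expert-like trial
```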
The participants tended to improve their learning curve and dexterity with this laparoscopic training system. The classifier shows a mean accuracy of 90.98% and an area under the receiver operating characteristic (ROC) curve of 0.93. Moreover, the ANN was able to classify the psychomotor skills of users into 2 classes: experienced or nonexperienced.
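For reference, the two reported metrics correspond to standard scikit-learn calls; the held-out labels and classifier scores below are placeholders chosen only to show the computation, not data from the study.

```python
# Sketch of the two reported metrics computed from hypothetical held-out labels
# and classifier scores (placeholder values, not results from the paper).
from sklearn.metrics import accuracy_score, roc_auc_score

y_true  = [1, 1, 1, 0, 0, 0, 0, 0]                     # 1 = experienced, 0 = nonexperienced
y_pred  = [1, 1, 1, 0, 1, 0, 0, 0]                     # hard class predictions from the ANN
y_score = [0.9, 0.8, 0.55, 0.2, 0.6, 0.1, 0.3, 0.4]    # predicted probability of class 1

print(f"accuracy = {accuracy_score(y_true, y_pred):.2%}")   # fraction of correct predictions
print(f"ROC AUC  = {roc_auc_score(y_true, y_score):.2f}")   # area under the ROC curve
```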
We constructed and evaluated an affordable laparoscopic trainer system using computer vision, augmented reality, and an artificial intelligence algorithm. The proposed trainer has the potential to increase the self-confidence of trainees and to be applied to programs with limited resources.