
The development of an eye movement-based deep learning system for laparoscopic surgical skills assessment.

Author Affiliations

Department of Industrial Management, National Taiwan University of Science and Technology, Taipei, Taiwan.

Department of Data Science, Soochow University, No. 70, Linhsi Road, Shihlin District, Taipei City, 111, Taiwan.

Publication Information

Sci Rep. 2022 Aug 15;12(1):11036. doi: 10.1038/s41598-022-15053-5.

Abstract

The development of valid, reliable, and objective methods of skills assessment is central to modern surgical training. Numerous rating scales have been developed and validated for quantifying surgical performance; however, many of these scoring systems are potentially flawed in their design in terms of reliability. Eye-tracking techniques, which provide a more objective view of the visual-cognitive aspects of the decision-making process, have recently been applied in the surgical domain for skill assessment and training. Their use has focused on investigating differences between expert and novice surgeons to understand task performance, identify experienced surgeons, and establish training approaches. Ten graduate students at the National Taiwan University of Science and Technology with no prior laparoscopic surgical experience were recruited to perform the FLS peg transfer task. A k-means clustering algorithm was then used to split the 500 trials into three dissimilar clusters, grouped as novice, intermediate, and expert levels, according to an objective performance assessment parameter that combines task duration with error score. Two types of data sets, namely time-series data extracted from eye-fixation coordinates and image data from videos, were used to implement and test the proposed skill-level detection system with ensemble learning and a CNN algorithm. Results indicated that ensemble learning and the CNN were able to correctly classify skill levels with accuracies of 76.0% and 81.2%, respectively. Furthermore, combining eye-fixation coordinates with image data allowed the discrimination of skill levels with a classification accuracy of 82.5%. We examined more levels of training experience and integrated an eye-tracking technique with deep learning algorithms to develop a tool for the objective assessment of laparoscopic surgical skill. Despite a relatively unbalanced sample, our results demonstrate that the approach combining visual-fixation coordinates with image features achieves a very promising level of performance for classifying the skill levels of trainees.
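The trial-grouping step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the composite score formula (with hypothetical weights `w_time` and `w_error`), the synthetic trial data, and the deterministic centroid initialization are all assumptions made for the example, since the abstract does not specify them.

```python
def performance_score(duration_s, error_score, w_time=1.0, w_error=1.0):
    """Composite performance metric (hypothetical weighting): the paper
    combines task duration with error score, but the exact formula is
    not given in the abstract."""
    return w_time * duration_s + w_error * error_score

def kmeans_1d(values, k=3, iters=20):
    """Minimal 1-D k-means with deterministic initialization.

    Centroids start evenly spread over the sorted values so the example
    is reproducible; returns (centroids, labels).
    """
    ordered = sorted(values)
    centroids = [ordered[i * (len(ordered) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each trial goes to its nearest centroid.
        labels = [min(range(k), key=lambda j: abs(v - centroids[j]))
                  for v in values]
        # Update step: move each centroid to the mean of its members
        # (an empty cluster keeps its previous centroid).
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

# Synthetic (duration in seconds, error score) pairs -- illustrative only.
trials = [(300, 8), (280, 7), (150, 3), (140, 2), (60, 0), (55, 1)]
scores = [performance_score(d, e) for d, e in trials]
centroids, labels = kmeans_1d(scores, k=3)

# Rank clusters by centroid: a lower composite score means better performance.
order = sorted(range(3), key=lambda j: centroids[j])
level_name = {order[0]: "expert", order[1]: "intermediate", order[2]: "novice"}
skill_levels = [level_name[lab] for lab in labels]
# skill_levels -> ['novice', 'novice', 'intermediate', 'intermediate',
#                  'expert', 'expert']
```

In the study the same idea is applied to 500 real trials; the resulting cluster labels then serve as ground truth for training the ensemble and CNN classifiers.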


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1799/9378740/c688ff642022/41598_2022_15053_Fig1_HTML.jpg
