
Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery.

Affiliations

Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX, 75080, USA.

Department of Surgery, UT Southwestern Medical Center, Dallas, TX, 75390, USA.

Publication Information

Int J Comput Assist Radiol Surg. 2018 Dec;13(12):1959-1970. doi: 10.1007/s11548-018-1860-1. Epub 2018 Sep 25.

Abstract

PURPOSE

With the advent of robot-assisted surgery, the role of data-driven approaches that integrate statistics and machine learning is growing rapidly, with prominent interest in objective surgical skill assessment. However, most existing work requires translating robot motion kinematics into intermediate features or gesture segments that are expensive to extract, lack efficiency, and require significant domain-specific knowledge.

METHODS

We propose an analytical deep learning framework for skill assessment in surgical training. A deep convolutional neural network is implemented to map multivariate time series data of the motion kinematics to individual skill levels.
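The abstract does not give the network's architecture, so as a minimal illustrative sketch (layer sizes, channel counts, and window length are all assumptions, not the authors' configuration), a small 1D CNN in NumPy can map a multivariate kinematic window directly to class probabilities over skill levels:

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1D convolution over the time axis.
    x: (channels_in, T), w: (channels_out, channels_in, k), b: (channels_out,)."""
    c_out, c_in, k = w.shape
    T = x.shape[1] - k + 1
    out = np.empty((c_out, T))
    for t in range(T):
        out[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1])) + b
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def skill_scores(window, params):
    """Map one kinematic window (channels, T) to probabilities over
    skill levels (e.g. novice / intermediate / expert) end to end,
    with no hand-engineered features or gesture segmentation."""
    h = relu(conv1d(window, params["w1"], params["b1"]))
    h = relu(conv1d(h, params["w2"], params["b2"]))
    pooled = h.mean(axis=1)                      # global average pooling over time
    logits = params["w_fc"] @ pooled + params["b_fc"]
    return softmax(logits)

# Toy random weights; a real model would be trained on labeled trials.
rng = np.random.default_rng(0)
params = {
    "w1": rng.standard_normal((8, 6, 5)) * 0.1,  "b1": np.zeros(8),
    "w2": rng.standard_normal((16, 8, 5)) * 0.1, "b2": np.zeros(16),
    "w_fc": rng.standard_normal((3, 16)) * 0.1,  "b_fc": np.zeros(3),
}
window = rng.standard_normal((6, 90))  # e.g. 6 kinematic channels over a short window
probs = skill_scores(window, params)
```

Because the classifier consumes fixed-length windows rather than whole trials, the same forward pass can be applied to short sliding windows of a live motion stream, which is the property the RESULTS section exploits for assessment within a few seconds.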

RESULTS

We perform experiments on the public minimally invasive surgical robotic dataset, JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our proposed learning model achieved competitive accuracies of 92.5%, 95.4%, and 91.3% in the standard training tasks: Suturing, Needle-passing, and Knot-tying, respectively. Without the need for engineered features or carefully tuned gesture segmentation, our model can successfully decode skill information from raw motion profiles via end-to-end learning. Meanwhile, the proposed model can reliably interpret skill within a 1-3 second window, without requiring observation of an entire training trial.

CONCLUSION

This study highlights the potential of deep architectures for efficient online skill assessment in modern surgical training.

