
A real-time approach for surgical activity recognition and prediction based on transformer models in robot-assisted surgery.

Author Information

Chen Ketai, Bandara D S V, Arata Jumpei

Affiliation

Advanced Medical Devices Laboratory, Kyushu University, Nishi-ku, Fukuoka, 819-0382, Japan.

Publication Information

Int J Comput Assist Radiol Surg. 2025 Apr;20(4):743-752. doi: 10.1007/s11548-024-03306-9. Epub 2025 Jan 12.

DOI: 10.1007/s11548-024-03306-9
PMID: 39799528
Abstract

PURPOSE

This paper presents a deep learning approach to recognizing and predicting surgical activity in robot-assisted minimally invasive surgery (RAMIS). Our primary objective is to deploy the developed model as part of a real-time surgical risk monitoring system for RAMIS.

METHODS

We propose a modified Transformer model whose architecture omits positional encoding and comprises 5 fully connected layers, 1 encoder, and 3 decoders. This model is specifically designed to address 3 primary tasks in surgical robotics: gesture recognition, gesture prediction, and end-effector trajectory prediction. Notably, it operates solely on kinematic data obtained from the joints of the robotic arm.
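
The architecture described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the kinematic feature dimension (38), the number of gesture classes (15), the model width, and the attention settings are assumptions not specified in the abstract; only the overall shape (no positional encoding, a single encoder, and three task-specific decoders and output heads) follows the description.

```python
# Hedged sketch of the described model: a Transformer without positional
# encoding, consuming raw kinematic sequences and producing three outputs
# (current gesture, next gesture, end-effector trajectory).
# All layer sizes below are illustrative assumptions.
import torch
import torch.nn as nn


class SurgicalActivityTransformer(nn.Module):
    def __init__(self, kin_dim=38, d_model=128, n_gestures=15):
        super().__init__()
        # Project kinematic features to the model width.
        # No positional encoding is added, per the abstract.
        self.embed = nn.Linear(kin_dim, d_model)
        # A single encoder, as described.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=1)
        # Three decoders, one per task, as described.
        def make_decoder():
            return nn.TransformerDecoder(
                nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
                num_layers=1)
        self.dec_recog = make_decoder()
        self.dec_pred = make_decoder()
        self.dec_traj = make_decoder()
        # Task heads (counted among the fully connected layers).
        self.head_recog = nn.Linear(d_model, n_gestures)  # current gesture
        self.head_pred = nn.Linear(d_model, n_gestures)   # upcoming gesture
        self.head_traj = nn.Linear(d_model, 3)            # end-effector xyz

    def forward(self, kin):
        # kin: (batch, time, kin_dim) joint kinematics
        h = self.embed(kin)
        mem = self.encoder(h)
        return (self.head_recog(self.dec_recog(h, mem)),
                self.head_pred(self.dec_pred(h, mem)),
                self.head_traj(self.dec_traj(h, mem)))


model = SurgicalActivityTransformer()
g_now, g_next, traj = model(torch.randn(2, 50, 38))  # 2 clips, 50 timesteps
```

With per-timestep outputs, the trajectory head can be supervised against the end-effector position a fixed horizon (e.g., 1 s) ahead, matching the prediction setup reported in the results.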

RESULTS

The model's performance was evaluated on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) dataset, achieving a highest accuracy of 94.4% for gesture recognition, 84.82% for gesture prediction, and a low distance error of 1.34 mm when predicting the trajectory 1 s in advance. Notably, the computational time per iteration was minimal, at only 4.2 ms.

CONCLUSION

The results demonstrate that our proposed model outperforms those of previous studies, highlighting its potential for integration into real-time systems. We believe our model could significantly advance surgical activity recognition and prediction in robot-assisted surgery and make a meaningful contribution to the healthcare sector.


Similar Articles

1
A real-time approach for surgical activity recognition and prediction based on transformer models in robot-assisted surgery.
Int J Comput Assist Radiol Surg. 2025 Apr;20(4):743-752. doi: 10.1007/s11548-024-03306-9. Epub 2025 Jan 12.
2
Recognition and Prediction of Surgical Gestures and Trajectories Using Transformer Models in Robot-Assisted Surgery.
Rep U S. 2022 Oct;2022:8017-8024. doi: 10.1109/IROS47612.2022.9981611. Epub 2022 Dec 26.
3
Endoscopic Image-Based Skill Assessment in Robot-Assisted Minimally Invasive Surgery.
Sensors (Basel). 2021 Aug 10;21(16):5412. doi: 10.3390/s21165412.
4
Automated surgical skill assessment in RMIS training.
Int J Comput Assist Radiol Surg. 2018 May;13(5):731-739. doi: 10.1007/s11548-018-1735-5. Epub 2018 Mar 16.
5
Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery.
Int J Comput Assist Radiol Surg. 2018 Dec;13(12):1959-1970. doi: 10.1007/s11548-018-1860-1. Epub 2018 Sep 25.
6
Biomechanics-machine learning system for surgical gesture analysis and development of technologies for minimal access surgery.
Surg Innov. 2014 Oct;21(5):504-12. doi: 10.1177/1553350613510612. Epub 2013 Dec 2.
7
An Automated Skill Assessment Framework Based on Visual Motion Signals and a Deep Neural Network in Robot-Assisted Minimally Invasive Surgery.
Sensors (Basel). 2023 May 5;23(9):4496. doi: 10.3390/s23094496.
8
Generalization of Deep Learning Gesture Classification in Robotic-Assisted Surgical Data: From Dry Lab to Clinical-Like Data.
IEEE J Biomed Health Inform. 2022 Mar;26(3):1329-1340. doi: 10.1109/JBHI.2021.3117784. Epub 2022 Mar 7.
9
Cross-modal self-supervised representation learning for gesture and skill recognition in robotic surgery.
Int J Comput Assist Radiol Surg. 2021 May;16(5):779-787. doi: 10.1007/s11548-021-02343-y. Epub 2021 Mar 24.
10
A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery.
IEEE Trans Biomed Eng. 2017 Sep;64(9):2025-2041. doi: 10.1109/TBME.2016.2647680. Epub 2017 Jan 4.

References Cited by This Article

1
A systematic review on artificial intelligence in robot-assisted surgery.
Int J Surg. 2021 Nov;95:106151. doi: 10.1016/j.ijsu.2021.106151. Epub 2021 Oct 22.
2
A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery.
IEEE Trans Biomed Eng. 2017 Sep;64(9):2025-2041. doi: 10.1109/TBME.2016.2647680. Epub 2017 Jan 4.
3
Emerging robotic platforms for minimally invasive surgery.
IEEE Rev Biomed Eng. 2013;6:111-26. doi: 10.1109/RBME.2012.2236311. Epub 2012 Dec 24.
4
Robotic surgery: a current perspective.
Ann Surg. 2004 Jan;239(1):14-21. doi: 10.1097/01.sla.0000103020.19595.7d.