A Kalman-Filter-Based Common Algorithm Approach for Object Detection in Surgery Scene to Assist Surgeon's Situation Awareness in Robot-Assisted Laparoscopic Surgery.

Affiliations

Department of Biomedical Engineering, Seoul National University, Seoul, Republic of Korea.

Biomedical Engineering Research Center, Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.

Publication

J Healthc Eng. 2018 May 2;2018:8079713. doi: 10.1155/2018/8079713. eCollection 2018.

DOI: 10.1155/2018/8079713
PMID: 29854366
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC5954863/
Abstract

Although the use of the surgical robot is rapidly expanding for various medical treatments, there still exist safety issues and concerns about robot-assisted surgeries due to limited vision through a laparoscope, which may cause compromised situation awareness and surgical errors requiring rapid emergency conversion to open surgery. To assist surgeon's situation awareness and preventive emergency response, this study proposes situation information guidance through a vision-based common algorithm architecture for automatic detection and tracking of intraoperative hemorrhage and surgical instruments. The proposed common architecture comprises the location of the object of interest using feature texture, morphological information, and the tracking of the object based on Kalman filter for robustness with reduced error. The average recall and precision of the instrument detection in four prostate surgery videos were 96% and 86%, and the accuracy of the hemorrhage detection in two prostate surgery videos was 98%. Results demonstrate the robustness of the automatic intraoperative object detection and tracking which can be used to enhance the surgeon's preventive state recognition during robot-assisted surgery.
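The abstract describes a two-stage pipeline: locate the object of interest (instrument or hemorrhage) from texture and morphological features, then track it with a Kalman filter to suppress detection noise. Below is a minimal illustrative sketch of the tracking stage only, assuming a constant-velocity state model over the detected centroid; this is not the authors' implementation, and the noise parameters (`q`, `r`) are assumed values, not taken from the paper.

```python
import numpy as np

# Constant-velocity Kalman filter over a detected object centroid (x, y).
# Illustrative sketch of the tracking stage; the detection stage
# (texture/morphology-based localization) is mocked by the caller.

class CentroidKalman:
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        # State vector: [x, y, vx, vy]
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                   # state covariance
        self.F = np.array([[1, 0, dt, 0],           # constant-velocity model
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],            # only position is observed
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q                      # process noise (assumed)
        self.R = np.eye(2) * r                      # measurement noise (assumed)

    def predict(self):
        # Propagate state one frame ahead; usable when the detector misses.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # Fuse a (possibly noisy) detected centroid z = [x, y].
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Example: smooth a detector track moving right at ~2 px/frame.
kf = CentroidKalman(100.0, 50.0)
for t in range(1, 21):
    kf.predict()
    est = kf.update([100.0 + 2.0 * t, 50.0])
```

Because the filter carries a velocity estimate, `predict()` alone can bridge frames where the detector fails (occlusion, smoke, specular glare), which is the robustness benefit the abstract attributes to the Kalman stage.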


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e067/5954863/9282fd953efb/JHE2018-8079713.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e067/5954863/8d33f4debefc/JHE2018-8079713.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e067/5954863/37ccd0580a4f/JHE2018-8079713.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e067/5954863/73e4972690a2/JHE2018-8079713.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e067/5954863/c461c1dc661c/JHE2018-8079713.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e067/5954863/28cb6d0071a7/JHE2018-8079713.006.jpg

Similar Articles

1
A Kalman-Filter-Based Common Algorithm Approach for Object Detection in Surgery Scene to Assist Surgeon's Situation Awareness in Robot-Assisted Laparoscopic Surgery.
J Healthc Eng. 2018 May 2;2018:8079713. doi: 10.1155/2018/8079713. eCollection 2018.
2
Object extraction via deep learning-based marker-free tracking framework of surgical instruments for laparoscope-holder robots.
Int J Comput Assist Radiol Surg. 2020 Aug;15(8):1335-1345. doi: 10.1007/s11548-020-02214-y. Epub 2020 Jun 24.
3
Robot-assisted laparoscopic prostatectomy versus open radical retropubic prostatectomy: 24-month outcomes from a randomised controlled study.
Lancet Oncol. 2018 Aug;19(8):1051-1060. doi: 10.1016/S1470-2045(18)30357-7. Epub 2018 Jul 17.
4
Intraoperative registered transrectal ultrasound guidance for robot-assisted laparoscopic radical prostatectomy.
J Urol. 2015 Jan;193(1):302-12. doi: 10.1016/j.juro.2014.05.124. Epub 2014 Aug 21.
5
Reducing robotic prostatectomy costs by minimizing instrumentation.
J Endourol. 2015 May;29(5):556-60. doi: 10.1089/end.2014.0533. Epub 2014 Dec 2.
6
A novel noncontact detection method of surgeon's operation for a master-slave endovascular surgery robot.
Med Biol Eng Comput. 2020 Apr;58(4):871-885. doi: 10.1007/s11517-020-02143-7. Epub 2020 Feb 19.
7
A CNN-based prototype method of unstructured surgical state perception and navigation for an endovascular surgery robot.
Med Biol Eng Comput. 2019 Sep;57(9):1875-1887. doi: 10.1007/s11517-019-02002-0. Epub 2019 Jun 20.
8
Video processing to locate the tooltip position in surgical eye-hand coordination tasks.
Surg Innov. 2015 Jun;22(3):285-93. doi: 10.1177/1553350614541859. Epub 2014 Jul 21.
9
Reality of nerve sparing and surgical margins in surgeons' early experience with robot-assisted radical prostatectomy in Japan.
Int J Urol. 2017 Mar;24(3):191-196. doi: 10.1111/iju.13281. Epub 2017 Jan 25.
10
[Methods for training of robot-assisted radical prostatectomy].
Khirurgiia (Mosk). 2019(1):89-94. doi: 10.17116/hirurgia201901189.

References Cited in This Article

1
Surgical-tools detection based on Convolutional Neural Network in laparoscopic robot-assisted surgery.
Annu Int Conf IEEE Eng Med Biol Soc. 2017 Jul;2017:1756-1759. doi: 10.1109/EMBC.2017.8037183.
2
Tracking-by-detection of surgical instruments in minimally invasive surgery via the convolutional neural network deep learning-based method.
Comput Assist Surg (Abingdon). 2017 Dec;22(sup1):26-35. doi: 10.1080/24699322.2017.1378777. Epub 2017 Sep 22.
3
Optical surgical instrument tracking system based on the principle of stereo vision.
J Biomed Opt. 2017 Jun 1;22(6):65005. doi: 10.1117/1.JBO.22.6.065005.
4
Detecting Surgical Tools by Modelling Local Appearance and Global Shape.
IEEE Trans Med Imaging. 2015 Dec;34(12):2603-17. doi: 10.1109/TMI.2015.2450831.
5
The current state of miniature in vivo laparoscopic robotics.
J Robot Surg. 2007;1(1):45-9. doi: 10.1007/s11701-007-0019-9. Epub 2007 Feb 7.
6
An integrated approach to endoscopic instrument tracking for augmented reality applications in surgical simulation training.
Int J Med Robot. 2013 Dec;9(4):e34-51. doi: 10.1002/rcs.1485. Epub 2013 Jan 25.
7
In-vivo real-time tracking of surgical instruments in endoscopic video.
Minim Invasive Ther Allied Technol. 2012 May;21(3):129-34. doi: 10.3109/13645706.2011.580764. Epub 2011 May 16.
8
Conventional laparoscopic and robot-assisted spleen-preserving pancreatectomy: does da Vinci have clinical advantages?
Surg Endosc. 2011 Jun;25(6):2004-9. doi: 10.1007/s00464-010-1504-1. Epub 2010 Dec 7.
9
Modeling and segmentation of surgical workflow from laparoscopic video.
Med Image Comput Comput Assist Interv. 2010;13(Pt 3):400-7. doi: 10.1007/978-3-642-15711-0_50.
10
Robotic surgery in gynecology.
Scand J Surg. 2009;98(2):96-109. doi: 10.1177/145749690909800205.