
Real-time surgical instrument detection in robot-assisted surgery using a convolutional neural network cascade.

Author Information

Zhao Zijian, Cai Tongbiao, Chang Faliang, Cheng Xiaolin

Affiliations

School of Control Science and Engineering, Shandong University, Jinan, Shandong, People's Republic of China.

Laboratory of Laparoscopic Technique and Engineering, Qilu Hospital of Shandong University, Jinan, Shandong, People's Republic of China.

Publication Information

Healthc Technol Lett. 2019 Nov 26;6(6):275-279. doi: 10.1049/htl.2019.0064. eCollection 2019 Dec.

Abstract

Surgical instrument detection in robot-assisted surgery videos is an important vision component for these systems. Most current deep learning methods focus on single-tool detection and suffer from low detection speed. To address this, the authors propose a novel frame-by-frame detection method using a cascading convolutional neural network (CNN), which consists of two different CNNs for real-time multi-tool detection. An hourglass network and a modified visual geometry group (VGG) network are applied to jointly predict the localisation. The former CNN outputs detection heatmaps representing the locations of tool-tip areas, and the latter performs bounding-box regression for tool-tip areas on these heatmaps stacked with the input RGB image frames. The authors' method is tested on two publicly available datasets. The experimental results show that it achieves better performance than mainstream detection methods in terms of both detection accuracy and speed.
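To make the two-stage cascade described in the abstract concrete, the following PyTorch sketch shows the general idea: a stage-1 hourglass-style network produces tool-tip heatmaps from an RGB frame, and a stage-2 VGG-style head regresses bounding boxes from the frame stacked with those heatmaps. All layer sizes and the names HourglassHeatmapNet, BBoxRegressionHead and detect_instruments are assumptions made for illustration; this is not the authors' implementation.

# Minimal sketch of the heatmap-then-regression cascade (illustrative only).
import torch
import torch.nn as nn

class HourglassHeatmapNet(nn.Module):
    """Stage 1: predicts one heatmap per tool marking tool-tip areas."""
    def __init__(self, num_tools=2):
        super().__init__()
        # Encoder (downsampling) path of the simplified hourglass.
        self.down = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder (upsampling) path back to the input resolution.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_tools, 4, stride=2, padding=1),
        )

    def forward(self, rgb):
        return torch.sigmoid(self.up(self.down(rgb)))  # (N, num_tools, H, W)

class BBoxRegressionHead(nn.Module):
    """Stage 2: VGG-style network regressing one box per tool from the
    RGB frame stacked channel-wise with the stage-1 heatmaps."""
    def __init__(self, num_tools=2):
        super().__init__()
        in_ch = 3 + num_tools  # RGB channels plus one heatmap per tool
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        # Four box coordinates (x, y, w, h) per tool.
        self.regressor = nn.Linear(128, num_tools * 4)

    def forward(self, rgb, heatmaps):
        x = torch.cat([rgb, heatmaps], dim=1)
        x = self.features(x).flatten(1)
        return self.regressor(x).view(-1, heatmaps.shape[1], 4)

def detect_instruments(frame, stage1, stage2):
    """Run the cascade on one frame: heatmaps first, then box regression."""
    with torch.no_grad():
        heatmaps = stage1(frame)
        boxes = stage2(frame, heatmaps)
    return heatmaps, boxes

if __name__ == "__main__":
    stage1, stage2 = HourglassHeatmapNet(), BBoxRegressionHead()
    frame = torch.rand(1, 3, 256, 256)  # dummy RGB frame
    heatmaps, boxes = detect_instruments(frame, stage1, stage2)
    print(heatmaps.shape, boxes.shape)  # (1, 2, 256, 256), (1, 2, 4)

Stacking the heatmaps with the RGB channels is what lets the second network focus its box regression on the tool-tip regions highlighted by the first network, which is the core of the cascade idea in the abstract.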


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9047/6952255/c921c888ddfb/HTL.2019.0064.01.jpg
