Rosa Benoît, Bordoux Valentin, Nageotte Florent
ICube, CNRS, University of Strasbourg, INSA, Strasbourg, France.
Front Robot AI. 2019 Sep 6;6:86. doi: 10.3389/frobt.2019.00086. eCollection 2019.
The segmentation of continuum robots in medical images can be of interest for analyzing surgical procedures or for controlling them. However, the automatic segmentation of continuous and flexible shapes is not an easy task. On the one hand, conventional approaches are not adapted to the specificities of these instruments, such as imprecise kinematic models; on the other hand, techniques based on deep learning have shown interesting capabilities but require many manually labeled images. In this article we propose a novel approach for segmenting continuum robots in endoscopic images, which requires no prior on the instrument's visual appearance and no manual annotation of images. The method relies on the combination of kinematic and differential kinematic models of the robot with the analysis of optical flow in the images. A cost function aggregating information from the acquired image, the optical flow, and the robot encoders is optimized using particle swarm optimization, providing estimated parameters of the pose of the continuum instrument and a mask defining the instrument in the image. In addition, temporal consistency is assessed in order to improve the stochastic optimization and reject outliers. The proposed approach has been tested on the robotic instruments of a flexible endoscopy platform, both for benchtop acquisitions and for an in vivo video. The results show the ability of the technique to correctly segment the instruments without a prior, even in challenging conditions. The obtained segmentation can be used for several applications, for instance providing automatic labels for machine learning techniques.
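The abstract describes optimizing a cost function over the instrument's pose parameters with particle swarm optimization (PSO). The following is only a generic, minimal PSO sketch to illustrate the optimizer itself, not the authors' implementation: the toy quadratic cost below stands in for their actual cost, which aggregates image, optical-flow, and encoder terms, and all parameter values (swarm size, inertia, acceleration coefficients) are illustrative assumptions.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization (illustrative sketch).

    `cost` maps a parameter vector (e.g. a pose parameterization) to a
    scalar; in the paper this would be the aggregate image/optical-flow/
    encoder cost, which is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    vel = np.zeros_like(pos)                             # particle velocities
    pbest = pos.copy()                                   # per-particle best positions
    pbest_val = np.array([cost(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()               # global best position
    g_val = pbest_val.min()
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed)
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard PSO velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if vals.min() < g_val:
            g_val = vals.min()
            g = pos[np.argmin(vals)].copy()
    return g, g_val

# Toy usage: recover a hypothetical 3-parameter "pose" minimizing a quadratic cost.
target = np.array([0.5, -1.2, 2.0])
best, best_val = pso_minimize(lambda p: float(np.sum((p - target) ** 2)), dim=3)
```

In the paper's setting, the decision variables would be the pose parameters of the continuum instrument, and each cost evaluation would render or project the kinematic model into the image to compare against the observed image and optical flow.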