Scheggi Stefano, Yoon ChangKyu, Ghosh Arijit, Gracias David H, Misra Sarthak
Department of Biomechanical Engineering, University of Twente, 7522 NB, The Netherlands.
Department of Materials Science and Engineering, The Johns Hopkins University, MD 21218, USA.
Rob Auton Syst. 2018 May;103:111-121. doi: 10.1016/j.robot.2017.11.003. Epub 2017 Dec 5.
Miniaturized grippers with an untethered structure are suitable for a wide range of tasks, from micromanipulation and microassembly to minimally invasive surgical interventions. In order to robustly perform such tasks, it is critical to properly estimate their overall configuration. Previous studies on the tracking and control of miniaturized agents mainly estimated their 2D pixel position, mostly using cameras and optical images as the feedback modality. This paper presents a novel solution to the problem of estimating and tracking the 3D position, orientation, and tip configuration of submillimeter grippers from marker-less visual observations. We cast this as an optimization problem, which is solved using a variant of the Particle Swarm Optimization algorithm. The proposed approach has been implemented on a Graphics Processing Unit (GPU), which allows a user to track the submillimeter agents online. It has been evaluated on several image sequences obtained from a camera and on B-mode ultrasound images obtained from an ultrasound probe. The sequences show the grippers moving, rotating, opening/closing, and grasping biological material. Qualitative results obtained using both hydrogel (soft) and metallic (hard) grippers of different shapes and sizes, ranging from 750 μm to 4 mm (tip to tip), demonstrate the capability of the proposed method to track the agent in all the video sequences. Quantitative results obtained by processing synthetic data reveal a tracking position error of 25 ± 7 μm and an orientation error of 1.7 ± 1.3 degrees. We believe that the proposed technique can be applied to different stimuli-responsive miniaturized agents, allowing the user to estimate the full configuration of complex agents from visual marker-less observations.
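The abstract does not specify the authors' PSO variant, so the following is only a minimal sketch of the general idea: a standard Particle Swarm Optimization loop searching over a low-dimensional pose vector (here a hypothetical 3-DOF example) to minimize an image-matching cost. The `objective` used below is a toy stand-in for the real marker-less observation likelihood; all parameter names and values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def pso(objective, dim, bounds, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic Particle Swarm Optimization (minimization).

    objective : callable mapping a (dim,) vector to a scalar cost
    bounds    : (lo, hi) box constraints applied to every dimension
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Initialize particle positions uniformly in the search box, zero velocity.
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    # Personal bests and the global (swarm) best.
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    g_f = pbest_f.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        if f.min() < g_f:
            g_f = f.min()
            g = x[f.argmin()].copy()
    return g, g_f

# Toy usage: recover a hypothetical (x, y, theta) pose by minimizing
# squared distance to a known target (standing in for an image cost).
target = np.array([0.3, -0.2, 1.0])
pose, cost = pso(lambda p: float(np.sum((p - target) ** 2)),
                 dim=3, bounds=(-2.0, 2.0))
```

In the actual tracker, the objective would compare a rendered model of the gripper at a candidate 3D pose/configuration against the camera or B-mode ultrasound frame, and the per-particle cost evaluations would be what the GPU implementation parallelizes.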