Mountney Peter, Lo Benny, Thiemjarus Surapa, Stoyanov Danail, Yang Guang-Zhong
Department of Computing, Imperial College, London SW7 2BZ, UK.
Med Image Comput Comput Assist Interv. 2007;10(Pt 2):34-41. doi: 10.1007/978-3-540-75759-7_5.
The use of vision-based algorithms in minimally invasive surgery (MIS) has attracted significant attention in recent years due to their potential to provide in situ 3D tissue deformation recovery for intra-operative surgical guidance and robotic navigation. Thus far, a large number of feature descriptors have been proposed in computer vision, but direct application of these techniques to minimally invasive surgery has proven problematic due to free-form tissue deformation and the varying visual appearance of surgical scenes. This paper evaluates the current state-of-the-art feature descriptors in computer vision and outlines their respective performance issues when used for deformation tracking. A novel probabilistic framework for selecting the most discriminative descriptors is presented, and a Bayesian fusion method is used to boost the accuracy and temporal persistency of soft-tissue deformation tracking. The performance of the proposed method is evaluated on both simulated data with known ground truth and in vivo video sequences recorded from robotic-assisted MIS procedures.
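The abstract does not specify how the Bayesian fusion of descriptors is carried out; the sketch below is only a generic illustration of the idea of combining several descriptors' match likelihoods through Bayes' rule, under a naive conditional-independence assumption. All function names, array shapes, and the toy numbers are hypothetical and are not taken from the paper.

```python
import numpy as np

def fuse_descriptor_likelihoods(likelihoods, prior):
    """Naive-Bayes fusion of per-descriptor match likelihoods.

    likelihoods : array of shape (n_descriptors, n_candidates), where
        likelihoods[d, c] = p(descriptor-d observation | candidate c is
        the true match). Assumes descriptors are conditionally
        independent given the true match -- a simplifying assumption,
        not necessarily the model used in the paper.
    prior : array of shape (n_candidates,), prior match probabilities.
    Returns the posterior distribution over candidate matches.
    """
    # Work in log space for numerical stability.
    log_post = np.log(prior) + np.sum(np.log(likelihoods), axis=0)
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

# Toy example: two descriptors scoring three candidate matches.
likelihoods = np.array([[0.7, 0.2, 0.1],
                        [0.6, 0.3, 0.1]])
prior = np.full(3, 1.0 / 3.0)   # uniform prior over candidates
posterior = fuse_descriptor_likelihoods(likelihoods, prior)
best = int(np.argmax(posterior))  # index of the most probable match
```

In this toy setting, fusing the two descriptors concentrates probability on the candidate both descriptors agree on, which is the intuition behind using fusion to improve the temporal persistency of a tracked feature.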