Masoud S. Nosrati, Alborz Amir-Khalili, Jean-Marc Peyrat, Julien Abinahed, Osama Al-Alao, Abdulla Al-Ansari, Rafeef Abugharbieh, Ghassan Hamarneh
Medical Image Analysis Lab, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada.
BiSICL, University of British Columbia, Vancouver, BC, Canada.
Int J Comput Assist Radiol Surg. 2016 Aug;11(8):1409-18. doi: 10.1007/s11548-015-1331-x. Epub 2016 Feb 12.
Despite great advances in medical image segmentation, the accurate and automatic segmentation of endoscopic scenes remains a challenging problem. Two important aspects have to be considered in segmenting an endoscopic scene: (1) noise and clutter due to light reflection and smoke from cutting tissue, and (2) structure occlusion (e.g. vessels occluded by fat, or endophytic tumours occluded by healthy kidney tissue).
In this paper, we propose a variational technique to augment a surgeon's endoscopic view by segmenting both visible and occluded structures in the intraoperative endoscopic view. Our method estimates the 3D pose and deformation of anatomical structures segmented from 3D preoperative data in order to align them to, and segment, the corresponding structures in 2D intraoperative endoscopic views. Our preoperative-to-intraoperative alignment is driven first by spatio-temporal, signal-processing-based vessel pulsation cues and second by machine-learning-based analysis of colour and texture visual cues. To our knowledge, this is the first work to utilize vascular pulsation cues for guiding preoperative-to-intraoperative registration. In addition, we incorporate a tissue-specific (i.e. heterogeneous) physically based deformation model into our framework to cope with the non-rigid deformation of structures that occurs during the intervention.
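To illustrate the general idea behind spatio-temporal vessel pulsation cues, the sketch below computes a per-pixel cue from the temporal frequency content of an endoscopic video: pixels over pulsating vasculature concentrate spectral energy near the cardiac frequency. This is a minimal, hypothetical illustration of the concept, not the authors' actual algorithm; the function name, the assumed cardiac band (0.8–2.0 Hz), and the energy-fraction score are all illustrative choices.

```python
import numpy as np

def pulsation_map(frames, fps, heart_hz=(0.8, 2.0)):
    """Per-pixel vessel-pulsation cue (illustrative, not the paper's method).

    frames: array of shape (T, H, W), grayscale endoscopic video.
    fps: frames per second of the video.
    heart_hz: assumed cardiac frequency band in Hz.
    Returns an (H, W) map: fraction of each pixel's temporal spectral
    energy that falls inside the cardiac band.
    """
    t = frames.shape[0]
    # Remove the per-pixel temporal mean so the DC component
    # does not dominate the spectrum.
    x = frames - frames.mean(axis=0, keepdims=True)
    # Power spectrum along time for every pixel.
    spec = np.abs(np.fft.rfft(x, axis=0)) ** 2        # (T//2+1, H, W)
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    band = (freqs >= heart_hz[0]) & (freqs <= heart_hz[1])
    # Score = energy in the cardiac band / total energy.
    total = spec.sum(axis=0) + 1e-12
    return spec[band].sum(axis=0) / total
```

A map like this could then serve as one data term driving the alignment of preoperatively segmented vessels to the endoscopic view, alongside the colour and texture cues the abstract mentions.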
We validated the utility of our technique on fifteen challenging clinical cases, achieving a 45% improvement in accuracy over the state-of-the-art method.
A new technique for localizing both visible and occluded structures in an endoscopic view was proposed and tested. This method leverages both preoperative data, as a source of patient-specific prior knowledge, as well as vasculature pulsation and endoscopic visual cues in order to accurately segment the highly noisy and cluttered environment of an endoscopic video. Our results on in vivo clinical cases of partial nephrectomy illustrate the potential of the proposed framework for augmented reality applications in minimally invasive surgeries.