Heimann Tobias, Mountney Peter, John Matthias, Ionasec Razvan
Siemens AG, Corporate Technology, Erlangen, Germany.
Siemens Corporation, Corporate Technology, Princeton, NJ, USA.
Med Image Comput Comput Assist Interv. 2013;16(Pt 3):49-56. doi: 10.1007/978-3-642-40760-4_7.
The fusion of image data from transesophageal echocardiography (TEE) and X-ray fluoroscopy is attracting increasing interest in the minimally invasive treatment of structural heart disease. To calculate the transform between the two imaging systems, we employ a discriminative learning based approach to localize the TEE transducer in X-ray images. Instead of time-consuming manual labeling, we generate the required training data automatically from a single volumetric image of the transducer. To adapt this system to real X-ray data, we use unlabeled fluoroscopy images to estimate differences in feature space density and correct covariate shift by instance weighting. An evaluation on more than 1900 images reveals that our approach reduces detection failures by 95% compared to cross-validation on the test set and reduces the localization error from 1.5 mm to 0.8 mm. Due to the automatic generation of training data, the proposed system is highly flexible and can be adapted to any medical device with minimal effort.
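The covariate-shift correction described above can be illustrated with a minimal sketch: a domain classifier is trained to separate synthetic training samples from unlabeled real fluoroscopy samples, and its posterior odds provide importance weights that reweight the synthetic data during detector training. All names here (X_syn, y_syn, X_real, covariate_shift_weights) are illustrative assumptions, not the authors' implementation, and the detector shown is a generic stand-in for the discriminative model used in the paper.

```python
# Hedged sketch of covariate-shift correction by instance weighting.
# Assumes feature matrices X_syn (rendered from one transducer volume, with labels y_syn)
# and X_real (unlabeled real fluoroscopy samples); these names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier


def covariate_shift_weights(X_syn, X_real):
    """Estimate importance weights ~ p_real(x) / p_syn(x) via a domain classifier."""
    X = np.vstack([X_syn, X_real])
    d = np.concatenate([np.zeros(len(X_syn)), np.ones(len(X_real))])  # 0 = synthetic, 1 = real
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_real = clf.predict_proba(X_syn)[:, 1]
    # Density-ratio estimate from the classifier's posterior odds,
    # corrected for the different pool sizes.
    weights = (p_real / (1.0 - p_real)) * (len(X_syn) / len(X_real))
    return np.clip(weights, 0.0, np.percentile(weights, 99))  # clip extreme weights


# Usage (arrays supplied by the user): train the transducer detector on
# reweighted synthetic samples so that it better matches real X-ray data.
# detector = GradientBoostingClassifier().fit(
#     X_syn, y_syn, sample_weight=covariate_shift_weights(X_syn, X_real))
```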