De Luca V, Grabner H, Petrusca L, Salomir R, Székely G, Tanner C
Computer Vision Laboratory, ETH Zürich, 8092 Zürich, Switzerland.
Med Image Comput Comput Assist Interv. 2011;14(Pt 1):597-604. doi: 10.1007/978-3-642-23623-5_75.
We propose an unconventional approach for transferring information between multi-modal images. It exploits the temporal commonality of multi-modal images acquired from the same organ during free breathing. Strikingly, the modalities need not capture the same region. The method is based on extracting a low-dimensional description of the image sequences, selecting the signal of the common cause (breathing) in both modalities, and finding the most similar sub-sequences for predicting image feature locations. The approach was evaluated on sequences of 2D MRI and 2D US images of the liver acquired at different locations from 3 volunteers. Simultaneous acquisition of these images allowed for quantitative evaluation (predicted versus ground-truth MRI feature locations). The best performance was achieved with signal extraction by slow feature analysis, resulting in an average error of 2.6 mm (4.2 mm) for sequences acquired at the same (a different) time.
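The pipeline the abstract outlines — extract a low-dimensional breathing signal from each modality's image sequence, then locate the most similar sub-sequence across modalities — can be illustrated with a minimal linear slow-feature-analysis sketch in NumPy. This is a sketch under stated assumptions: the function names and the synthetic test data are illustrative, and the authors' actual image descriptors, SFA variant, and matching criterion may differ.

```python
import numpy as np

def linear_sfa(X):
    """Linear slow feature analysis.

    X: (T, d) array, one low-dimensional image descriptor per frame.
    Returns the slowest-varying linear feature of the sequence,
    i.e. the whitened direction whose temporal derivative has the
    smallest variance (here: a breathing-like signal).
    """
    X = X - X.mean(axis=0)
    # Whiten the descriptors so all directions have unit variance.
    eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
    keep = eigval > 1e-10                      # drop degenerate directions
    W = eigvec[:, keep] / np.sqrt(eigval[keep])
    Z = X @ W
    # The slow feature minimizes the variance of the temporal derivative;
    # eigh returns eigenvalues in ascending order, so column 0 is slowest.
    dZ = np.diff(Z, axis=0)
    _, vec = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return Z @ vec[:, 0]

def nearest_subsequence(query, signal, w):
    """Return the start index of the length-w window of `signal`
    most similar (Euclidean distance) to `query`.

    With breathing signals from two modalities, the matched window's
    frames can then be used to predict feature locations in the
    modality not currently imaged.
    """
    dists = [np.linalg.norm(signal[i:i + w] - query)
             for i in range(len(signal) - w + 1)]
    return int(np.argmin(dists))
```

For example, `nearest_subsequence(s_us[-w:], s_mri, w)` would retrieve the MRI breathing phase matching the last `w` ultrasound frames; the feature positions annotated on those MRI frames then serve as the prediction (variable names hypothetical).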