Rasheed Hassan, Dorent Reuben, Fehrentz Maximilian, Morozov Daniil, Kapur Tina, Wells William M, Golby Alexandra, Frisken Sarah, Schnabel Julia A, Haouchine Nazim
Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA.
Technical University of Munich, Munich, Germany.
Simpl Med Ultrasound (2024). 2025;15186:78-87. doi: 10.1007/978-3-031-73647-6_8. Epub 2024 Oct 5.
In this paper, we propose a texture-invariant 2D keypoint descriptor specifically designed for matching preoperative Magnetic Resonance (MR) images with intraoperative Ultrasound (US) images. We introduce a strategy in which intraoperative US images are synthesized from MR images, accounting for multiple MR modalities and intraoperative US variability. We build our training set by enforcing keypoint localization over all images, then train a patient-specific descriptor network that learns texture-invariant discriminative features in a supervised contrastive manner, leading to robust keypoint descriptors. Our experiments on real cases with ground truth demonstrate the effectiveness of the proposed approach, which outperforms state-of-the-art methods and achieves an average matching precision of 80.35%.
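To make the training objective described in the abstract concrete, the sketch below illustrates a supervised contrastive setup for cross-modal patch descriptors in PyTorch. It is a minimal illustration, not the authors' implementation: the network `PatchDescriptor`, the function `contrastive_loss`, the patch size, the descriptor dimension, and the temperature are all assumptions chosen for the example; the only assumption taken from the abstract is that descriptors of corresponding MR and synthesized-US patches (same keypoint) should be pulled together while non-corresponding ones are pushed apart.

```python
# Illustrative sketch only (not the paper's code): a small patch-descriptor
# network trained with a supervised contrastive (InfoNCE-style) loss on
# corresponding MR / synthesized-US patches extracted at shared keypoints.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchDescriptor(nn.Module):
    """Maps a 32x32 grayscale patch to an L2-normalized descriptor (assumed sizes)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)


def contrastive_loss(desc_a, desc_b, temperature=0.07):
    """Symmetric InfoNCE over a batch of corresponding patch pairs:
    row i of desc_a (e.g. an MR patch) matches row i of desc_b
    (e.g. a synthesized-US patch at the same keypoint)."""
    logits = desc_a @ desc_b.t() / temperature       # (B, B) cosine similarities
    targets = torch.arange(desc_a.size(0))           # positives lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


# Toy usage with random tensors standing in for MR / synthesized-US patch pairs.
model = PatchDescriptor()
mr_patches = torch.randn(16, 1, 32, 32)
us_patches = torch.randn(16, 1, 32, 32)
loss = contrastive_loss(model(mr_patches), model(us_patches))
loss.backward()
```

Training on patches synthesized with varied US appearance from the same MR keypoints is what would encourage the learned descriptors to be texture-invariant; at inference, matching would then reduce to nearest-neighbor search between descriptor sets of the two modalities.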