Liu Haolin, Han Ye, Emerson Daniel, Rabin Yoed, Kara Levent Burak
Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America.
PLoS One. 2025 Apr 14;20(4):e0319196. doi: 10.1371/journal.pone.0319196. eCollection 2025.
A method that allows a fast and accurate registration of digital tissue models obtained during preoperative, diagnostic imaging with those captured intraoperatively using lower-fidelity ultrasound imaging techniques is presented. Minimally invasive surgeries are often planned using preoperative, high-fidelity medical imaging techniques such as MRI and CT. While these techniques allow clinicians to obtain detailed 3D models of the surgical region of interest (ROI), various factors such as physical changes to the tissue, changes in the body's configuration, or apparatus used during the surgery may cause large, non-linear deformations of the ROI. Such deformations of the tissue can result in a severe mismatch between the preoperatively obtained 3D model and the real-time image data acquired during surgery, potentially compromising surgical success. To overcome this challenge, this work presents a new approach for predicting intraoperative soft tissue deformations. The approach works by simply tracking the displacements of a handful of fiducial markers or analogous biological features embedded in the tissue, and produces a 3D deformed version of the high-fidelity ROI model that registers accurately with the intraoperative data. In an offline setting, we use the finite element method to generate deformation fields under various boundary conditions that mimic the realistic environment of soft tissues during a surgery. To reduce the dimensionality of the 3D deformation field involving thousands of degrees of freedom, we use an autoencoder neural network to encode each computed deformation field into a short latent space representation, such that a second neural network can accurately map the fiducial marker displacements to the latent space. Our computational tests on a head and neck tumor, a kidney, and an aorta model show prediction errors as small as 0.5 mm.
Considering that the typical resolution of interventional ultrasound is around 1 mm and each prediction takes less than 0.5 s, the proposed approach has the potential to be clinically relevant for an accurate tracking of soft tissue deformations during image-guided surgeries.
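The inference pipeline the abstract describes (tracked fiducial displacements → latent code → decoded full deformation field) can be sketched as follows. This is a minimal illustration in NumPy, not the authors' implementation: the trained regressor and autoencoder decoder are stood in for by randomly initialized weight matrices, and all dimensions (number of markers, mesh nodes, latent size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not values from the paper):
n_markers = 8      # fiducial markers tracked intraoperatively
n_nodes = 5000     # finite element mesh nodes in the ROI model
latent_dim = 16    # size of the autoencoder's latent representation

# Stand-ins for the trained networks: a regressor mapping marker
# displacements (3 components per marker) to the latent code, and the
# autoencoder's decoder mapping the latent code back to the full
# 3D nodal deformation field. In the real system both would be trained
# offline on FEM-generated deformation fields.
W_reg = rng.standard_normal((latent_dim, 3 * n_markers)) * 0.1
W_dec = rng.standard_normal((3 * n_nodes, latent_dim)) * 0.1

def predict_deformation(marker_disp):
    """Map tracked marker displacements to a per-node deformation field."""
    z = np.tanh(W_reg @ marker_disp.ravel())  # encode markers -> latent code
    field = W_dec @ z                         # decode latent -> full field
    return field.reshape(n_nodes, 3)          # 3D displacement per mesh node

# Marker displacements as they might arrive from intraoperative tracking (mm).
marker_disp = rng.standard_normal((n_markers, 3))
field = predict_deformation(marker_disp)
print(field.shape)
```

Because inference is just two small matrix products rather than a finite element solve, a prediction of this form runs in well under the 0.5 s per-prediction budget reported in the abstract.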