Shao M H, Xu S K, Guo Y E, Lyu F Z, Ma X S, Xia X L, Wang H L, Jiang J Y
Department of Orthopedics, Huashan Hospital, Fudan University, Shanghai 200040, China.
Shanghai Maestro Surgical Robotics Co, Shanghai 201702, China.
Zhonghua Yi Xue Za Zhi. 2024 Oct 8;104(37):3513-3519. doi: 10.3760/cma.j.cn112137-20240408-00815.
To investigate the accuracy and efficiency of 2D/3D registration between preoperative spine CT and intraoperative X-ray images using a single-vertebra navigation registration framework based on the fusion of dual-position image features.

Preoperative CT and intraoperative anteroposterior (AP) and lateral (LAT) X-ray images of 140 lumbar spine patients treated at Huashan Hospital, Fudan University, from January 2020 to December 2023 were selected. To achieve rapid, high-precision single-vertebra registration in clinical orthopedic surgery, a transformation-parameter feature extraction module combined with a lightweight convolutional block attention module (CBAM), providing channel and spatial attention, was designed to accurately extract the local single-vertebra transformation information. A fusion regression module then combined the complementary features of the AP and LAT images to improve the accuracy of the registration parameter regression, and two 1×1 convolutions were used to reduce the amount of parameter computation, improve computational efficiency, and shorten intraoperative registration time. Finally, the regression module output the transformation parameters. Comparative experiments were conducted with traditional iterative methods (Opt-MI, Opt-NCC, Opt-C2F) and an existing deep learning method, a convolutional neural network (CNN), as control groups. Registration accuracy (mRPD), registration time, and registration success rate were compared across methods.

Experiments on real CT data verified the image-guided registration accuracy of the proposed method, which achieved a registration accuracy of (0.81±0.41) mm in the mRPD metric, a rotational angle error of 0.57°±0.24°, and a translation error of (0.41±0.21) mm. In comparisons across mainstream backbone models, the registration accuracy of the selected DenseNet was significantly better than that of ResNet and VGG (both P<0.05). Compared with the existing deep learning method [mRPD: (2.97±0.99) mm, rotational angle error: 2.64°±0.54°, translation error: (2.15±0.41) mm, registration time: (0.03±0.05) seconds], the proposed method significantly improved registration accuracy (all P<0.05). The registration success rate reached 97%, with an average single-registration time of only (0.04±0.02) seconds. Compared with the traditional iterative methods [mRPD: (0.78±0.26) mm, rotational angle error: 0.84°±0.57°, translation error: (1.05±0.28) mm, registration time: (35.5±10.5) seconds], the registration efficiency of the proposed method was significantly improved (all P<0.05). The dual-position fusion also compensated for the limitations of a single-view perspective, yielding significantly lower transformation parameter errors than either the AP or LAT single view alone (both P<0.05).

Compared with existing methods, the proposed CT/X-ray registration method significantly reduces registration time while maintaining high registration accuracy, achieving efficient and precise single-vertebra registration.
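The abstract does not include implementation code. Below is a minimal, hypothetical PyTorch sketch of the architecture it describes: per-view feature extraction with a DenseNet backbone followed by a CBAM-style channel/spatial attention block, concatenation of the AP and LAT feature maps, two 1×1 convolutions to reduce the fused channels, and a regression head for the transformation parameters. The class names, the choice of DenseNet-121, the single-channel input handling, the channel sizes, and the 6-parameter rigid output (3 rotations, 3 translations) are illustrative assumptions, not details confirmed by the paper.

# Hedged sketch (not the authors' released code) of the dual-view registration network.
import torch
import torch.nn as nn
from torchvision.models import densenet121


class CBAM(nn.Module):
    """Convolutional block attention module: channel attention, then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)            # channel attention
        sp = torch.cat([x.mean(dim=1, keepdim=True),
                        x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(sp))              # spatial attention


class DualViewRegistrationNet(nn.Module):
    """Fuses AP and LAT image features and regresses 6 rigid transformation parameters."""
    def __init__(self):
        super().__init__()
        def branch():
            backbone = densenet121(weights=None).features            # torchvision >= 0.13 API
            # Swap the first conv for a single-channel (grayscale X-ray/DRR) input.
            backbone[0] = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
            return nn.Sequential(backbone, CBAM(1024))
        self.ap_branch, self.lat_branch = branch(), branch()
        # Two 1x1 convolutions shrink the fused feature map before regression.
        self.fuse = nn.Sequential(
            nn.Conv2d(2048, 512, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(512, 128, kernel_size=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.regressor = nn.Linear(128, 6)                           # [rx, ry, rz, tx, ty, tz]

    def forward(self, ap_img, lat_img):
        fused = torch.cat([self.ap_branch(ap_img), self.lat_branch(lat_img)], dim=1)
        return self.regressor(self.fuse(fused))


if __name__ == "__main__":
    net = DualViewRegistrationNet()
    ap = torch.randn(1, 1, 256, 256)     # anteroposterior view
    lat = torch.randn(1, 1, 256, 256)    # lateral view
    print(net(ap, lat).shape)            # torch.Size([1, 6])

In such a design, the two 1×1 convolutions act purely as channel-reduction layers on the concatenated AP/LAT features, which is one plausible reading of the abstract's statement that they cut parameter computation and speed up intraoperative registration.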