Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland.
Med Image Anal. 2024 Dec;98:103322. doi: 10.1016/j.media.2024.103322. Epub 2024 Aug 22.
In this study, we address critical barriers to the widespread adoption of surgical navigation in orthopedic surgery: time constraints, cost, radiation exposure, and integration into the surgical workflow. Recently, our work X23D demonstrated an approach for generating 3D anatomical models of the spine from only a few intraoperative fluoroscopic images. This approach obviates conventional registration-based surgical navigation by creating a direct intraoperative 3D reconstruction of the anatomy. Despite these strides, the practical application of X23D has been limited by a significant domain gap between synthetic training data and real intraoperative images. In response, we devised a novel data collection protocol to assemble a paired dataset of synthetic and real fluoroscopic images captured from identical perspectives. Leveraging this unique dataset, we refined our deep learning model through transfer learning, effectively bridging the domain gap between synthetic and real X-ray data. We introduce an approach that combines style transfer with the curated paired dataset: real X-ray images are transformed into the synthetic domain, enabling the in-silico-trained X23D model to achieve high accuracy in real-world settings. Our results demonstrate that the refined model can rapidly generate accurate 3D reconstructions of the entire lumbar spine from as few as three intraoperative fluoroscopic shots. The enhanced model reached sufficient accuracy, achieving an 84% F1 score and matching the benchmark previously established with synthetic data alone. Moreover, with a computational time of just 81.1 ms, our approach offers the real-time capability vital for integration into active surgical procedures. By investigating optimal imaging setups and view-angle dependencies, we further validated the practicality and reliability of our system in a clinical environment.
Our research represents a promising advancement in intraoperative 3D reconstruction. This innovation has the potential to enhance intraoperative surgical planning, navigation, and surgical robotics.