Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo, Kyoto, 606-8501, Japan.
Nagoya University Hospital, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan.
Comput Med Imaging Graph. 2024 Sep;116:102418. doi: 10.1016/j.compmedimag.2024.102418. Epub 2024 Jul 19.
Shape registration of patient-specific organ shapes to endoscopic camera images is expected to be a key to realizing image-guided surgery, and a variety of machine learning approaches have been considered. Because the number of training data available from clinical cases is limited, the use of synthetic images generated from a statistical deformation model has been attempted; however, the gap between synthetic images and real scenes degrades estimation accuracy. In this study, we propose a self-supervised offline learning framework for model-based registration that uses image features obtainable from both synthetic images and real camera images. Because few endoscopic images are available for training, we use synthetic images generated from a nonlinear deformation model that represents possible intraoperative pneumothorax deformations. To address the difficulty of estimating deformed shapes and viewpoints from the features common to synthetic and real images, we reduce the registration error by adding the shading and distance information available as prior knowledge in the synthetic images. Shape registration with real camera images is then performed by learning to predict the differential model parameters between two synthetic images. The developed framework achieved registration accuracy with a mean absolute error of less than 10 mm and a mean distance of less than 5 mm in thoracoscopic pulmonary cancer resection, confirming improved prediction accuracy over conventional methods.
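The core learning task described above — predicting the differential model parameters between two synthetic images, with labels obtained for free from the generator — can be illustrated with a toy sketch. Everything here is hypothetical and greatly simplified: the `deform` function stands in for the paper's nonlinear deformation model, the `features` function stands in for the image features shared by synthetic and real views, and a linear least-squares regressor replaces the learned network. It is not the authors' implementation, only a minimal demonstration of the self-supervised setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the statistical/nonlinear deformation model:
# parameters map a fixed template point cloud to a deformed organ shape.
template = rng.normal(size=(50, 3))

def deform(params):
    # params: (3,) per-axis scaling offsets (toy deformation)
    return template * (1.0 + params)

def features(shape):
    # Crude image-like descriptor: moments of an orthographic projection,
    # standing in for features common to synthetic and real camera images.
    proj = shape[:, :2]
    return np.concatenate([proj.mean(axis=0), proj.std(axis=0)])

# Self-supervised training data: for each pair of sampled parameter
# vectors, the ground-truth differential parameters are known by design.
X, y = [], []
for _ in range(500):
    p1, p2 = rng.normal(scale=0.1, size=(2, 3))
    X.append(np.concatenate([features(deform(p1)), features(deform(p2))]))
    y.append(p2 - p1)
X, y = np.asarray(X), np.asarray(y)

# Linear regressor predicting the differential model parameters.
W, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate on held-out synthetic pairs.
errors = []
for _ in range(100):
    p1, p2 = rng.normal(scale=0.1, size=(2, 3))
    x = np.concatenate([features(deform(p1)), features(deform(p2))])
    errors.append(np.abs(x @ W - (p2 - p1)).mean())
print(f"mean abs parameter error: {np.mean(errors):.4f}")
```

Note that the out-of-plane parameter is unobservable from this 2D projection, which mirrors (very loosely) the abstract's point that shape and viewpoint are hard to estimate from image features alone and motivates the added shading and distance priors.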