School of Computer Science, Faculty of Engineering and Physical Sciences, University of Leeds, Leeds, LS2 9JT, United Kingdom.
Centre Hospitalier Universitaire de Clermont-Ferrand, Clermont-Ferrand, France.
Med Image Anal. 2025 Jan;99:103371. doi: 10.1016/j.media.2024.103371. Epub 2024 Oct 22.
Augmented reality for laparoscopic liver resection is a visualisation mode that allows a surgeon to localise tumours and vessels embedded within the liver by projecting them on top of a laparoscopic image. During this process, preoperative 3D models extracted from Computed Tomography (CT) or Magnetic Resonance (MR) imaging data are registered to the intraoperative laparoscopic images. For 3D-2D fusion, most algorithms guide registration with anatomical landmarks such as the liver's inferior ridge, the falciform ligament, and the occluding contours. These are usually marked by hand in both the laparoscopic image and the 3D model, which is time-consuming and error-prone. Automating this process is therefore necessary for augmented reality to be used effectively in the operating room. We present the Preoperative-to-Intraoperative Laparoscopic Fusion challenge (P2ILF), held during the Medical Image Computing and Computer Assisted Intervention (MICCAI 2022) conference, which investigated the possibility of detecting these landmarks automatically and using them in registration. The challenge was divided into two tasks: (1) a 2D and 3D landmark segmentation task and (2) a 3D-2D registration task. The teams were provided with training data consisting of 167 laparoscopic images and 9 preoperative 3D models from 9 patients, together with the corresponding 2D and 3D landmark annotations. A total of 6 teams from 4 countries participated in the challenge, and their results were assessed independently for each task. All teams proposed deep learning-based methods for the 2D and 3D landmark segmentation task and differentiable rendering-based methods for the registration task. The proposed methods were evaluated on 16 test images and 2 preoperative 3D models from 2 patients. In Task 1, the teams were able to segment most of the 2D landmarks, while the 3D landmarks proved more challenging to segment.
In Task 2, only one team obtained acceptable qualitative and quantitative registration results. Based on the experimental outcomes, we propose three key hypotheses that identify the current limitations and future research directions in this domain.
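At its core, the landmark-guided 3D-2D registration evaluated in Task 2 can be framed as rigid pose estimation: find the rotation and translation that minimise the reprojection distance between projected 3D model landmarks and their 2D image counterparts. The sketch below illustrates only this basic formulation, assuming a pinhole camera with known intrinsics; all function names are illustrative, and this is not any participating team's method (the teams used differentiable rendering, and real liver registration must additionally handle tissue deformation):

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Axis-angle vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(points3d, rvec, tvec, K):
    """Rigidly transform 3D landmarks into the camera frame and apply
    pinhole projection with intrinsics K = [[fx,0,cx],[0,fy,cy],[0,0,1]]."""
    cam = points3d @ rodrigues(rvec).T + tvec
    uv = cam[:, :2] / cam[:, 2:3]          # normalised image coordinates
    return uv @ K[:2, :2].T + K[:2, 2]     # pixel coordinates

def register(points3d, points2d, K, x0=None):
    """Estimate a rigid pose (rvec, tvec) minimising reprojection error
    between projected 3D landmarks and observed 2D landmarks."""
    if x0 is None:
        x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])  # identity pose, 1 m away
    def residuals(p):
        return (project(points3d, p[:3], p[3:], K) - points2d).ravel()
    return least_squares(residuals, x0).x  # [rvec (3), tvec (3)]
```

In practice the 2D landmarks come from the segmentation step (Task 1), the 3D landmarks from the preoperative model, and correspondences between curve-like landmarks such as the inferior ridge are themselves ambiguous, which is one reason rigid point-based registration alone is insufficient for this problem.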