SeeSaw: Learning Soft Tissue Deformation From Laparoscopy Videos With GNNs.

Authors

Docea Reuben, Xu Jinjing, Ling Wei, Jenke Alexander C, Kolbinger Fiona R, Distler Marius, Riediger Carina, Weitz Jurgen, Speidel Stefanie, Pfeiffer Micha

Publication

IEEE Trans Biomed Eng. 2024 Dec;71(12):3432-3445. doi: 10.1109/TBME.2024.3424771. Epub 2024 Nov 21.

Abstract

A major challenge in image-guided laparoscopic surgery is that structures of interest often deform and go, even if only momentarily, out of view. Methods which rely on having an up-to-date impression of those structures, such as registration or localisation, are undermined in these circumstances. This is particularly true for soft-tissue structures that continually change shape: in registration, they must often be re-mapped. Furthermore, methods which require 'revisiting' of previously seen areas cannot in principle function reliably in dynamic contexts, drastically weakening their uptake in the operating room. We present a novel approach for learning to estimate the deformed states of previously seen soft-tissue surfaces from currently observable regions, using a combined approach that includes a Graph Neural Network (GNN). The training data is based on stereo laparoscopic surgery videos and is generated semi-automatically with minimal labelling effort. Trackable segments are first identified using a feature detection algorithm, from which surface meshes are produced using depth estimation and Delaunay triangulation. We show the method can predict the displacements of previously visible soft-tissue structures connected to currently visible regions with observed displacements, on both patient data and porcine data. Our approach learns to compensate for non-rigidity in abdominal endoscopic scenes directly from stereo laparoscopic videos through a new problem formulation, and stands to benefit a variety of target applications in dynamic environments.
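The abstract describes a pipeline of tracked feature points, depth-based lifting to 3D, Delaunay triangulation into a surface mesh, and a GNN that estimates displacements of out-of-view nodes from visible ones. The sketch below is a hypothetical, much-simplified illustration of that idea, not the authors' method: it builds the mesh with `scipy.spatial.Delaunay` and replaces the learned GNN with a crude iterative neighbour-averaging step that propagates observed displacements from visible to occluded nodes over the mesh edges.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_mesh(points_2d, depth):
    """Triangulate tracked image points and lift them to 3D using depth."""
    tri = Delaunay(points_2d)                        # 2D Delaunay triangulation
    points_3d = np.column_stack([points_2d, depth])  # (x, y, z)
    return points_3d, tri.simplices

def edges_from_triangles(triangles):
    """Collect the directed edge set of a triangle mesh."""
    edges = set()
    for a, b, c in triangles:
        edges.update({(a, b), (b, a), (b, c), (c, b), (a, c), (c, a)})
    return edges

def propagate_displacements(n_nodes, edges, disp, visible, n_iters=50):
    """Estimate occluded-node displacements by averaging visible/updated
    neighbours -- a stand-in for the paper's learned GNN prediction."""
    disp = disp.copy()
    neighbours = [[] for _ in range(n_nodes)]
    for i, j in edges:
        neighbours[i].append(j)
    for _ in range(n_iters):
        for i in range(n_nodes):
            if not visible[i] and neighbours[i]:
                disp[i] = np.mean([disp[j] for j in neighbours[i]], axis=0)
    return disp

# Toy example: four tracked points forming a quad; node 3 is now occluded.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
depth = np.array([0.5, 0.5, 0.6, 0.6])
points_3d, tris = build_mesh(pts, depth)
edges = edges_from_triangles(tris)
disp = np.array([[0.1, 0.0, 0.0]] * 4)  # observed motion of visible nodes
disp[3] = 0.0                           # occluded node: motion unknown
visible = np.array([True, True, True, False])
est = propagate_displacements(len(pts), edges, disp, visible)
print(est[3])  # occluded node inherits its visible neighbours' displacement
```

In the actual method the propagation step is learned from stereo video data rather than hand-coded, so the network can capture soft-tissue mechanics instead of assuming locally uniform motion as this averaging does.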

