
Head pose-assisted localization of facial landmarks for enhanced fast registration in skull base surgery.

Author information

Yang Yifei, Fan Jingfan, Fu Tianyu, Xiao Deqiang, Ma Dongsheng, Song Hong, Feng Zhengkai, Liu Youping, Yang Jian

Affiliations

School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, PR China.

School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, PR China; Zhengzhou Research Institute, Beijing Institute of Technology, Zhengzhou 450000, Henan, PR China.

Publication information

Comput Med Imaging Graph. 2025 Mar;120:102483. doi: 10.1016/j.compmedimag.2024.102483. Epub 2024 Dec 30.

Abstract

In skull base surgery, using a probe to trace the face or a 3D scanner to acquire intraoperative facial point clouds for spatial registration presents several issues. Manual manipulation is inefficient and inconsistent. Traditional point-cloud registration algorithms depend heavily on the initial pose, and their complexity can further extend the required time. To address these issues, we used an RGB-D camera to capture facial point clouds in real time during surgery. The initial registration between the 3D model reconstructed from preoperative CT/MR images and the intraoperatively collected point cloud is accomplished through corresponding facial landmarks. Facial point clouds collected intraoperatively often contain rotations caused by the free-angle camera. Benefiting from the close spatial geometric relationship between head pose and facial landmark coordinates, we propose a facial landmark localization network assisted by head pose estimation. The shared-representation head pose estimation module boosts network performance by enhancing its perception of global facial features. The proposed network facilitates the localization of landmarks in both preoperative and intraoperative point clouds, enabling rapid automatic registration. A free-view human facial landmark dataset called 3D-FVL was synthesized from clinical CT images for training. The proposed network achieves leading localization accuracy and robustness on two public datasets and on 3D-FVL. In clinical experiments using the Artec Eva scanner, the trained network reduced the average registration time to 0.28 s with an average registration error of 2.33 mm. The proposed method significantly reduces registration time while meeting the clinical accuracy requirements of surgical navigation. Our research will help improve the efficiency and quality of skull base surgery.
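The abstract describes an initial registration computed from corresponding facial landmarks in the preoperative model and the intraoperative point cloud. A common closed-form way to align two paired 3D landmark sets rigidly is the Kabsch/Umeyama SVD method; the sketch below illustrates that general technique under the assumption of paired, noise-free landmarks, and is not the authors' exact implementation.

```python
import numpy as np

def register_landmarks(src, dst):
    """Rigid alignment (rotation R, translation t) of corresponding 3D
    landmark sets so that R @ src_i + t ~= dst_i (Kabsch/Umeyama)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Center both landmark sets on their centroids.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    # SVD of the cross-covariance matrix yields the optimal rotation.
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against an improper (reflected) solution
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known rotation and translation from four landmarks.
rng = np.random.default_rng(0)
pts = rng.random((4, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = register_landmarks(pts, pts @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With noise-free correspondences the transform is recovered exactly; in practice such a landmark-based estimate typically serves as the coarse initialization that a fine registration step (e.g. ICP) then refines.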

