Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction.

Author Information

Liu Zhentao, Fang Yu, Li Changjian, Wu Han, Liu Yuan, Shen Dinggang, Cui Zhiming

Publication Information

IEEE Trans Med Imaging. 2025 Feb;44(2):1083-1097. doi: 10.1109/TMI.2024.3473970. Epub 2025 Feb 4.

Abstract

Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging. Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image, leading to considerable radiation exposure. This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses. While recent advances, including deep learning and neural rendering algorithms, have made strides in this area, these methods either produce unsatisfactory results or suffer from the time inefficiency of individual optimization. In this paper, we introduce a novel geometry-aware encoder-decoder framework to solve this problem. Our framework starts by encoding multi-view 2D features from various 2D X-ray projections with a 2D CNN encoder. Leveraging the geometry of CBCT scanning, it then back-projects the multi-view 2D features into 3D space to form a comprehensive volumetric feature map, followed by a 3D CNN decoder to recover the 3D CBCT image. Importantly, our approach respects the geometric relationship between the 3D CBCT image and its 2D X-ray projections during the feature back-projection stage, and benefits from prior knowledge learned from the data population. This ensures its adaptability in dealing with extremely sparse-view inputs without individual training, such as scenarios with only 5 or 10 X-ray projections. Extensive evaluations on two simulated datasets and one real-world dataset demonstrate the exceptional reconstruction quality and time efficiency of our method.
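
To make the pipeline concrete, the sketch below illustrates the geometry-aware feature back-projection step described in the abstract: each voxel center is projected into every view using the cone-beam geometry, the corresponding 2D features are sampled, and the per-view samples are fused into a volumetric feature map. This is a minimal PyTorch-style sketch, not the authors' implementation; the 3x4 per-view projection matrices, tensor shapes, and the simple averaging fusion across views are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of geometry-aware feature
# back-projection. Assumes each view provides a 3x4 cone-beam projection
# matrix mapping homogeneous 3D world points to 2D detector pixel
# coordinates, and that voxels lie in front of the source (positive depth).
import torch
import torch.nn.functional as F

def backproject_features(feats_2d, proj_mats, vol_shape, vol_extent):
    """
    feats_2d  : (V, C, H, W)  multi-view 2D feature maps from a 2D CNN encoder
    proj_mats : (V, 3, 4)     per-view projection matrices (world -> detector pixels)
    vol_shape : (D, Hv, Wv)   target voxel grid resolution
    vol_extent: float         half-size of the cubic reconstruction volume (world units)
    returns   : (C, D, Hv, Wv) fused volumetric feature map
    """
    V, C, H, W = feats_2d.shape
    D, Hv, Wv = vol_shape

    # Voxel-center coordinates of the 3D grid in world space.
    zs = torch.linspace(-vol_extent, vol_extent, D)
    ys = torch.linspace(-vol_extent, vol_extent, Hv)
    xs = torch.linspace(-vol_extent, vol_extent, Wv)
    zz, yy, xx = torch.meshgrid(zs, ys, xs, indexing="ij")
    pts = torch.stack([xx, yy, zz, torch.ones_like(xx)], dim=-1)  # (D, Hv, Wv, 4)
    pts = pts.reshape(-1, 4).T                                    # (4, N)

    vol_feat = torch.zeros(C, D * Hv * Wv)
    for v in range(V):
        # Perspective projection of every voxel center into view v.
        uvw = proj_mats[v] @ pts                   # (3, N)
        uv = uvw[:2] / uvw[2:].clamp(min=1e-6)     # (2, N) detector pixel coords

        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        u = uv[0] / (W - 1) * 2 - 1
        v_ = uv[1] / (H - 1) * 2 - 1
        grid = torch.stack([u, v_], dim=-1).view(1, 1, -1, 2)     # (1, 1, N, 2)

        # Bilinearly sample the 2D feature map at the projected locations;
        # voxels projecting outside the detector receive zeros.
        sampled = F.grid_sample(feats_2d[v:v + 1], grid,
                                align_corners=True)               # (1, C, 1, N)
        vol_feat += sampled[0, :, 0]

    vol_feat /= V                                   # simple average over views
    return vol_feat.view(C, D, Hv, Wv)              # input to a 3D CNN decoder

if __name__ == "__main__":
    # Toy example: 10 views of 32x32 feature maps with 16 channels, fused
    # into a 24^3 feature volume. Real projection matrices would come from
    # the scanner geometry; random ones are used here only to run the code.
    feats = torch.randn(10, 16, 32, 32)
    mats = torch.randn(10, 3, 4)
    vol = backproject_features(feats, mats, (24, 24, 24), vol_extent=100.0)
    print(vol.shape)  # torch.Size([16, 24, 24, 24])
```

A 3D CNN decoder, as described in the abstract, would then map this fused volumetric feature map to the reconstructed 3D CBCT image; the averaging fusion above is the simplest choice, and a learned aggregation across views is an equally plausible alternative.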

