IEEE Trans Vis Comput Graph. 2020 Jan;26(1):960-970. doi: 10.1109/TVCG.2019.2934369. Epub 2019 Aug 22.
This paper introduces a deep neural network based method, DeepOrganNet, to generate and visualize high-fidelity 3D / 4D organ geometric models in real time from single-view medical images with complicated backgrounds. Traditional 3D / 4D medical image reconstruction typically requires hundreds of projections, which incurs prohibitive computational time and delivers an undesirably high imaging / radiation dose to human subjects. Moreover, it usually requires further laborious post-processing to segment or extract accurate 3D organ models. The computational time and imaging dose can be reduced by decreasing the number of projections, but the reconstructed image quality degrades accordingly. To our knowledge, no existing method directly and explicitly reconstructs multiple 3D organ meshes from a single 2D medical grayscale image on the fly. Given single-view 2D medical images, e.g., 3D / 4D-CT projections or X-ray images, our end-to-end DeepOrganNet framework can efficiently and effectively reconstruct 3D / 4D lung models with a variety of geometric shapes by learning smooth deformation fields from multiple templates, based on a trivariate tensor-product deformation technique and leveraging an informative latent descriptor extracted from the input 2D images. The proposed method is guaranteed to generate high-quality, high-fidelity manifold meshes for 3D / 4D lung models, whereas current deep learning based approaches to shape reconstruction from a single image are not. The major contributions of this work are to accurately reconstruct 3D organ shapes from a single 2D projection, to significantly reduce the processing time to allow on-the-fly visualization, and to dramatically reduce the imaging dose for human subjects. Experimental results are evaluated and compared against a traditional reconstruction method and the state of the art in deep learning, using extensive 3D and 4D examples, including both synthetic phantom and real patient datasets.
The proposed method needs only a few milliseconds to generate an organ mesh with 10K vertices, showing great potential for use in real-time image-guided radiation therapy (IGRT).
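The trivariate tensor-product deformation that the abstract refers to can be sketched as a classic Bernstein free-form deformation (FFD): a small lattice of control-point displacements is smoothly interpolated over the unit cube and applied to every mesh vertex. The function name, lattice size, and API below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from math import comb

def ffd_deform(points, control_offsets):
    """Deform points inside the unit cube [0, 1]^3 with a trivariate
    tensor-product Bernstein (free-form deformation) lattice.

    points          : (P, 3) array of vertex coordinates in [0, 1]^3
    control_offsets : (L+1, M+1, N+1, 3) displacements of a regular
                      control lattice; the deformation blends them
                      smoothly via tensor-product Bernstein weights.
    """
    l, m, n = (s - 1 for s in control_offsets.shape[:3])

    def bernstein(deg, i, t):
        # i-th Bernstein basis polynomial of the given degree at t
        return comb(deg, i) * (t ** i) * ((1 - t) ** (deg - i))

    s, t, u = points[:, 0], points[:, 1], points[:, 2]
    deformed = points.astype(float).copy()
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                # tensor-product weight for control point (i, j, k)
                w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                deformed += w[:, None] * control_offsets[i, j, k]
    return deformed
```

Because the Bernstein weights form a partition of unity, a zero lattice leaves the mesh unchanged and a uniform lattice translates it rigidly; in DeepOrganNet the network predicts such control-point displacements so that the deformed template remains a smooth manifold mesh.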