Department of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen 518055, People's Republic of China.
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China.
Phys Med Biol. 2022 Apr 1;67(8). doi: 10.1088/1361-6560/ac508d.
Tomographic images are essential for clinical diagnosis and trauma surgery, allowing clinicians to examine a patient's internal anatomy in detail. Because the cumulative x-ray dose from continuous imaging during computed tomography (CT) scanning can cause serious harm to the human body, reconstructing tomographic images from sparse views is a potential solution to this problem. Here we present a deep-learning framework for tomographic image reconstruction, TIReconNet, which frames reconstruction as a data-driven supervised learning task in which a mapping between a 2D projection view and a 3D volume emerges from a training corpus. The proposed framework consists of four parts: a feature extraction module, a shape mapping module, a volume generation module, and a super-resolution module. By combining 2D and 3D operations, the framework generates high-resolution tomographic images with relatively modest computing resources while preserving spatial information. The proposed method is verified on chest digitally reconstructed radiographs, where the reconstructed tomographic images achieve a PSNR of 18.621 ± 1.228 dB and an SSIM of 0.872 ± 0.041 against the ground truth. In conclusion, this study proposes and validates an innovative convolutional neural network architecture, demonstrating the potential to generate a high-resolution 3D tomographic image from a single 2D image using deep learning. This method may promote the application of reconstruction techniques for radiation dose reduction and motivate further exploration of intraoperative guidance in trauma and orthopedic surgery.
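The four-stage pipeline described in the abstract can be illustrated with a minimal NumPy shape-flow sketch. All dimensions, the pooling factor, and the channel-to-depth reshape used for the shape mapping step are illustrative assumptions, not the paper's actual design; each learned module is replaced by a trivial stand-in so only the tensor shapes are meaningful.

```python
import numpy as np

H = W = 128   # input projection size (assumed)
C = 64        # feature channels after 2D encoding (assumed)
D = 64        # reconstructed volume depth (assumed, set equal to C)

# 1) Feature extraction: encode the 2D projection into feature maps.
#    Stand-in: 4x4 average pooling, then replication across C channels.
proj = np.random.rand(H, W).astype(np.float32)
feat = proj.reshape(H // 4, 4, W // 4, 4).mean(axis=(1, 3))   # (32, 32)
feat = np.broadcast_to(feat, (C, H // 4, W // 4)).copy()      # (64, 32, 32)

# 2) Shape mapping: reinterpret the channel axis as a depth axis,
#    turning stacked 2D feature maps into a coarse 3D volume.
vol_coarse = feat.reshape(D, H // 4, W // 4)                  # (64, 32, 32)

# 3) Volume generation: refine the coarse volume with 3D operations.
#    Stand-in: identity.
vol = vol_coarse

# 4) Super resolution: upsample the in-plane axes back to full size.
#    Stand-in: nearest-neighbor repetition.
vol_hr = vol.repeat(4, axis=1).repeat(4, axis=2)              # (64, 128, 128)
```

In a trained network the stand-ins would be learned 2D convolutions, 3D convolutions, and an upsampling head, but the shape flow from a single (H, W) projection to a (D, H, W) volume is the same.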