A material decomposition method for dual-energy CT via dual interactive Wasserstein generative adversarial networks.

Affiliations

School of Microelectronics, Tianjin University, Tianjin, 300072, China.

Tianjin Key Laboratory of Imaging and Sensing Microelectronic Technology, Tianjin, 300072, China.

Publication Information

Med Phys. 2021 Jun;48(6):2891-2905. doi: 10.1002/mp.14828. Epub 2021 May 5.

Abstract

PURPOSE

Dual-energy computed tomography (DECT) is highly promising for material characterization and identification, but the reconstructed material-specific images are affected by magnified noise and beam-hardening artifacts. Although various DECT material decomposition methods have been proposed to address this problem, the quality of the decomposed images remains unsatisfactory, particularly at image edges. In this study, a data-driven approach using dual interactive Wasserstein generative adversarial networks (DIWGAN) is developed to improve DECT decomposition accuracy and generate edge-preserving material-specific images.
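For context, image-domain two-material decomposition is commonly written as a per-pixel linear inversion; the sketch below states that standard background model (it is not taken from this paper) and shows why the inversion magnifies noise.

```latex
% Standard image-domain two-material DECT model (background, not from the paper).
% \mu_H, \mu_L: reconstructed attenuation values at the high/low energy bins;
% a_{ij}: attenuation coefficient of basis material j at energy bin i;
% x_1, x_2: basis-material densities recovered at each pixel.
\begin{pmatrix} \mu_H \\ \mu_L \end{pmatrix}
=
\underbrace{\begin{pmatrix} a_{H1} & a_{H2} \\ a_{L1} & a_{L2} \end{pmatrix}}_{A}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix},
\qquad
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= A^{-1} \begin{pmatrix} \mu_H \\ \mu_L \end{pmatrix}.
% The rows of A are similar because attenuation varies slowly between the two
% energies, so A is ill-conditioned and A^{-1} amplifies the noise in \mu_H, \mu_L.
```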

METHODS

In the proposed DIWGAN, two interactive generators are used to synthesize the decomposed images of two basis materials by modeling the spatial and spectral correlations of the input DECT reconstructed images, and the corresponding discriminators are employed to distinguish the generated images from the labels. The DECT images reconstructed from the high- and low-energy bins are fed to the two generators separately, and each generator synthesizes one material-specific image, thereby ensuring the specificity of the network modeling. In addition, the information from the different energy bins is exploited through feature sharing between the two generators. During decomposition model training, a hybrid loss function comprising an L1 loss, an edge loss, and an adversarial loss is incorporated to preserve the texture and edges of the generated images. Additionally, a selector is employed to decide which generator is trained in each iteration, which ensures the modeling ability of the two different generators and improves the material decomposition accuracy. The performance of the proposed method is evaluated using a digital phantom, the XCAT phantom, and real data from a mouse.
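A minimal PyTorch sketch of one generator update under this scheme follows. It is an illustration only: the module architectures, the Sobel-based edge loss, the loss weights, and the alternating selector rule are all assumptions, since the abstract does not specify them. It shows the three ingredients named above: feature sharing between two generators, the hybrid L1 + edge + adversarial loss, and a selector choosing which generator is trained in each iteration.

```python
# Hedged sketch of the dual interactive WGAN generator step described above.
# All module names, shapes, and weights are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Toy generator: one material-specific image from one energy-bin image,
    exposing an intermediate feature map for sharing with its twin."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        # decoder fuses own features with features shared by the other generator
        self.dec = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x, shared):
        feat = self.enc(x)
        return self.dec(torch.cat([feat, shared], dim=1)), feat

class Critic(nn.Module):
    """Toy WGAN critic scoring one material-specific image."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, ch, 4, stride=2, padding=1),
                                 nn.LeakyReLU(0.2),
                                 nn.Conv2d(ch, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # per-sample scalar score

def edge_loss(pred, target):
    # Sobel-gradient L1 difference as a stand-in for the paper's edge loss.
    kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
    ky = kx.transpose(2, 3)
    gx = F.conv2d(pred, kx, padding=1) - F.conv2d(target, kx, padding=1)
    gy = F.conv2d(pred, ky, padding=1) - F.conv2d(target, ky, padding=1)
    return gx.abs().mean() + gy.abs().mean()

g1, g2 = Generator(), Generator()   # one generator per basis material
d1, d2 = Critic(), Critic()         # one critic per basis material
opt_g = torch.optim.Adam(list(g1.parameters()) + list(g2.parameters()), lr=1e-4)

def generator_step(x_high, x_low, label1, label2, step):
    """One hybrid-loss update; the selector trains one generator per iteration."""
    f1 = g1.enc(x_high)             # features shared across the two generators
    f2 = g2.enc(x_low)
    y1, _ = g1(x_high, f2)          # material 1 from the high-energy image
    y2, _ = g2(x_low, f1)           # material 2 from the low-energy image
    # Selector: a simple alternating rule (assumed; the abstract does not give
    # the actual criterion) picks which generator's loss drives this update.
    if step % 2 == 0:
        loss = (F.l1_loss(y1, label1) + 0.1 * edge_loss(y1, label1)
                - 0.01 * d1(y1).mean())   # WGAN adversarial term
    else:
        loss = (F.l1_loss(y2, label2) + 0.1 * edge_loss(y2, label2)
                - 0.01 * d2(y2).mean())
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```

The critic updates are omitted for brevity; in a WGAN they would maximize the score gap between labels and generated images, typically with a gradient penalty or weight clipping.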

RESULTS

On the digital phantom, the bone and soft-tissue regions are strictly and accurately separated by the trained decomposition model. The material densities in the different bone and soft-tissue regions are close to the ground truth, with density errors below 3 mg/ml. The results on the XCAT phantom show that the material-specific images generated by the direct matrix inversion and iterative decomposition methods suffer from severe noise and artifacts. Among the learning-based methods, the decomposed images of the fully convolutional network (FCN) and the butterfly network (Butterfly-Net) still contain varying degrees of artifacts, while the proposed DIWGAN yields high-quality images. Compared with Butterfly-Net, the root-mean-square error (RMSE) of the soft-tissue images generated by DIWGAN decreased by 0.01 g/ml, and the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the soft-tissue images reached 31.43 dB and 0.9987, respectively. The mass densities of the decomposed materials are closest to the ground truth when using the DIWGAN method. The noise standard deviation of the decomposed images is reduced by 69%, 60%, 33%, and 21% compared with direct matrix inversion, iterative decomposition, FCN, and Butterfly-Net, respectively. Furthermore, the results on the mouse data indicate the potential of the proposed material decomposition method on real scanned data.
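For reference, the quoted metrics can be computed as below. This is a generic sketch using scikit-image; the data range, ROI choice, and exact evaluation protocol used in the paper are not specified in the abstract and are assumed here.

```python
# Generic computation of the metrics quoted above (RMSE, PSNR, SSIM, noise SD).
# The data range and the uniform ROI are assumptions, not the paper's protocol.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def decomposition_metrics(pred, gt, data_range=None):
    """pred, gt: 2-D material-density maps (e.g., in g/ml) of equal shape."""
    if data_range is None:
        data_range = gt.max() - gt.min()
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    ssim = structural_similarity(gt, pred, data_range=data_range)
    return rmse, psnr, ssim

def noise_sd(image, roi_mask):
    """Noise standard deviation inside a uniform region-of-interest mask."""
    return float(np.std(image[roi_mask]))
```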

CONCLUSIONS

A DECT material decomposition method based on deep learning is proposed, in which the mapping from the reconstructed images to the material-specific images is learned by training the DIWGAN model. Results on both the simulated phantoms and the real data demonstrate the advantages of this method in suppressing noise and beam-hardening artifacts.

