

Preserving noise texture through training data curation for deep learning denoising of high-resolution cardiac EID-CT.

Author Information

Treb Kevin, Chang Shaojie, Koons Emily, Marsh Jeffrey, Foley Thomas, Williamson Eric, McCollough Cynthia, Leng Shuai

Affiliations

Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA.

Publication Information

Med Phys. 2025 Jul;52(7):e17938. doi: 10.1002/mp.17938.

Abstract

BACKGROUND

To utilize high-spatial-resolution reconstructions for cardiac imaging at energy-integrating detector CT (EID-CT) with noise comparable to similar reconstructions at photon-counting detector CT (PCD-CT), methods to control EID-CT image noise are needed. Supervised convolutional neural networks (CNNs) have shown promise for denoising, but efficiently creating high-quality, unbiased noise estimates without access to dedicated software or proprietary information, such that natural noise texture is retained in CNN-denoised CT images, remains a challenge.

PURPOSE

This study aims to develop and test image-based noise estimation methods that can be used to train a CNN model, and to evaluate denoising performance and noise texture preservation for EID-CT coronary CT angiography (cCTA) images reconstructed with high-resolution kernels.

METHODS

U-net CNN models were trained for denoising. To supervise training, noise-only images were estimated directly from high-resolution kernel (Bv59) reconstructed EID-CT (HR EID-CT) patient images using two different methods: (1) subtraction of low- and high-strength iterative reconstruction (IR) images of the same slice, and (2) subtraction of adjacent image slices reconstructed with the same IR strength. The noise estimates from these methods contain differing noise texture and anatomical information. Networks were trained and validated separately on three data sets: the training data from each of the two noise-estimation methods, and a 50%-50% partition of training data between the two methods. The trained models were applied to two sets of testing data: CT images of a uniform water phantom, used to measure noise power spectra (NPS), and an independent cohort of seven patient cCTA HR EID-CT exams. The denoised patient images were compared to standard-resolution EID-CT reconstructions (Bv40). As a low-noise reference, patient images acquired on the same day with a PCD-CT and reconstructed with a kernel similar to that of the HR EID-CT images were used for comparison.
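To make the two image-based noise-estimation strategies concrete, the sketch below shows one plausible way to form the noise-only images and the supervised training pairs. This is a minimal illustration, not the authors' implementation: the function names, the 1/√2 scaling of the adjacent-slice difference, and the assumption that the training target is the noisy input minus its noise estimate are ours; the abstract states only that noise-only images were used to supervise training.

```python
import numpy as np

def noise_from_ir_subtraction(slice_low_ir: np.ndarray, slice_high_ir: np.ndarray) -> np.ndarray:
    """Noise-only estimate from one slice reconstructed at two IR strengths:
    the low-strength (noisier) image minus the high-strength (smoother) image.
    Residual anatomy and IR-dependent texture differences remain in the estimate."""
    return slice_low_ir - slice_high_ir

def noise_from_adjacent_slices(slice_k: np.ndarray, slice_k_plus_1: np.ndarray) -> np.ndarray:
    """Noise-only estimate from two adjacent slices reconstructed with the same
    IR strength. The 1/sqrt(2) scaling (an assumption, not stated in the abstract)
    restores single-slice noise variance if the two realizations are independent."""
    return (slice_k - slice_k_plus_1) / np.sqrt(2.0)

def make_training_pairs(noisy_slices, noise_estimates, mix_fraction=0.5, seed=0):
    """Form supervised (input, target) pairs, assuming the target is the noisy
    HR EID-CT slice minus its estimated noise image. noise_estimates holds one
    (ir_estimate, adjacent_slice_estimate) tuple per slice; mix_fraction controls
    how the two estimation methods are partitioned across the training set."""
    rng = np.random.default_rng(seed)
    pairs = []
    for noisy, (noise_ir, noise_adj) in zip(noisy_slices, noise_estimates):
        noise = noise_adj if rng.random() < mix_fraction else noise_ir
        pairs.append((noisy, noisy - noise))
    return pairs
```

Under these assumptions, a mix_fraction of 0.0 or 1.0 reproduces the two single-method training sets, while 0.5 corresponds to the 50%-50% partition described above.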

RESULTS

Models trained with each noise-image estimation method reduced noise in the HR EID-CT images by 74%-79%, achieving a noise magnitude comparable to the HR PCD-CT images. The peak, average, and 10%-of-peak frequencies of the NPS of the input images (6.08, 6.24, and 12.0 cm⁻¹) were better approximated by the model trained on adjacent-slice subtraction (6.56, 5.87, and 11.5 cm⁻¹) than by the model trained on subtraction of low- and high-strength IR images (4.64, 5.44, and 11.3 cm⁻¹). In cCTA images, the IR-subtraction model retained anatomic structures from the input images but produced an undesirable salt-and-pepper noise texture and a CT number bias. The model trained on adjacent-slice subtraction images had more natural texture and no significant bias, but it sometimes removed small anatomic structures. The model trained on the mixed training data set preserved both the noise texture and the anatomy of the model inputs and enabled visualization of small structures seen in PCD-CT images that were previously unresolved by EID-CT.
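For context on how the NPS summary metrics above can be obtained, the following is a minimal sketch of a radially averaged NPS computed from noise-only ROIs of a uniform water phantom. The ROI handling, the radial binning, and the exact definitions of the average and 10%-of-peak frequencies are assumptions and may differ in detail from the paper's analysis.

```python
import numpy as np

def radial_nps_1d(noise_rois: np.ndarray, pixel_size_cm: float):
    """Radially averaged 1D noise power spectrum from a stack of square
    noise-only ROIs (shape: n_rois x N x N) of a uniform water phantom.
    Returns (spatial frequency axis in cm^-1, 1D NPS)."""
    n_rois, n, _ = noise_rois.shape
    nps_2d = np.zeros((n, n))
    for roi in noise_rois:
        roi = roi - roi.mean()                        # remove the mean (DC) level
        dft = np.fft.fftshift(np.fft.fft2(roi))
        nps_2d += np.abs(dft) ** 2 * pixel_size_cm ** 2 / (n * n)
    nps_2d /= n_rois

    # Bin the 2D NPS by radial spatial frequency.
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size_cm))   # cm^-1
    fx, fy = np.meshgrid(freqs, freqs)
    fr = np.hypot(fx, fy)
    df = 1.0 / (n * pixel_size_cm)
    bin_edges = np.arange(0.0, fr.max() + df, df)
    bin_idx = np.clip(np.digitize(fr.ravel(), bin_edges) - 1, 0, len(bin_edges) - 1)
    sums = np.bincount(bin_idx, weights=nps_2d.ravel(), minlength=len(bin_edges))
    counts = np.bincount(bin_idx, minlength=len(bin_edges))
    return bin_edges + 0.5 * df, sums / np.maximum(counts, 1)

def nps_summary_frequencies(f: np.ndarray, nps: np.ndarray):
    """Peak frequency, NPS-weighted average frequency, and the highest
    frequency at which the NPS is still >= 10% of its peak value."""
    f_peak = f[np.argmax(nps)]
    f_avg = np.sum(f * nps) / np.sum(nps)
    f_10pct = f[np.where(nps >= 0.1 * nps.max())[0][-1]]
    return f_peak, f_avg, f_10pct
```

Applied to water-phantom images denoised by each model, these metrics quantify how closely the output noise texture matches that of the unprocessed high-resolution input.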

CONCLUSIONS

The noise texture and anatomical accuracy of CT images denoised with an image-based supervised CNN are strongly influenced by the characteristics and partitioning of the training data. With higher-resolution reconstructions and noise-texture-preserving deep learning denoising, the quality of cCTA images from EID-CT can be enhanced so that subtle anatomy is resolved similarly to PCD-CT.

