Hu Can, Bian Congchao, Cao Ning, Zhou Han, Guo Bin
School of Computer and Software, Hohai University, Nanjing 211100, China.
School of Electronic Science and Engineering, Nanjing University, Nanjing 210046, China.
Bioengineering (Basel). 2024 Aug 8;11(8):805. doi: 10.3390/bioengineering11080805.
Diffusion-weighted imaging (DWI), a key component of multiparametric magnetic resonance imaging (mpMRI), plays a pivotal role in the detection, diagnosis, and evaluation of gastric cancer. Despite its potential, DWI is often marred by substantial anatomical distortion and susceptibility artifacts, which can limit its practical utility. At present, improving DWI image quality requires cutting-edge hardware and extended scan times. A rapid technique that optimally balances shortened acquisition time with improved image quality would therefore have substantial clinical relevance.
This study aims to construct and evaluate an unsupervised learning framework, the attention dual-contrast vision transformer CycleGAN (ADCVCGAN), for enhancing image quality and reducing scanning time in gastric DWI.
The ADCVCGAN framework proposed in this study employs high b-value DWI (b-DWI, b = 1200 s/mm²) as a reference for generating synthetic b-value DWI (s-DWI) from acquired lower b-value DWI (a-DWI, b = 800 s/mm²). Specifically, ADCVCGAN incorporates a CBAM attention module into the CycleGAN generator to enhance feature extraction from the input a-DWI in both the channel and spatial dimensions. A vision transformer module, built on the U-net framework, is then introduced to refine detailed features, aiming to produce s-DWI with image quality comparable to that of b-DWI. Finally, images from the source domain are added as negative samples to the discriminator, encouraging the discriminator to steer the generator towards synthesizing images distant from the source domain in the latent space, with the goal of generating more realistic s-DWI. The image quality of the s-DWI is quantitatively assessed using the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), mean squared error (MSE), weighted peak signal-to-noise ratio (WPSNR), and weighted mean squared error (WMSE). Subjective scores for the different DWI images were compared using the Wilcoxon signed-rank test. The reproducibility and consistency of b-ADC and s-ADC, calculated from b-DWI and s-DWI, respectively, were assessed using the intraclass correlation coefficient (ICC). A p-value below 0.05 was considered statistically significant.
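The quantitative assessment and the ADC computation described above can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the function names are assumptions, and the two-point ADC formula assumes a monoexponential signal decay S(b) = S0·exp(−b·ADC) between the two acquired b-values (800 and 1200 s/mm²).

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference image and a test image."""
    ref = np.asarray(ref, dtype=np.float64)
    img = np.asarray(img, dtype=np.float64)
    return np.mean((ref - img) ** 2)

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(ref, img)
    if err == 0:
        return np.inf
    return 10.0 * np.log10(max_val ** 2 / err)

def adc_map(s_low, s_high, b_low=800.0, b_high=1200.0, eps=1e-8):
    """ADC from two b-value DWI signals, assuming monoexponential decay
    S(b) = S0 * exp(-b * ADC):  ADC = ln(S_low / S_high) / (b_high - b_low).
    `eps` guards against division by zero in background voxels."""
    s_low = np.asarray(s_low, dtype=np.float64)
    s_high = np.asarray(s_high, dtype=np.float64)
    return np.log((s_low + eps) / (s_high + eps)) / (b_high - b_low)

# Illustrative check: recover a known ADC from synthetic signals.
adc_true = 1.2e-3                      # mm^2/s, a typical tissue value
s0 = 1000.0
s800 = s0 * np.exp(-800.0 * adc_true)   # simulated a-DWI signal
s1200 = s0 * np.exp(-1200.0 * adc_true) # simulated b-DWI signal
print(adc_map(s800, s1200).item())
```

In practice the same ADC formula is applied voxel-wise to whole image volumes; the weighted variants (WPSNR, WMSE) additionally apply a perceptual weighting map to the error term before averaging.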
The s-DWI generated by the unsupervised learning framework ADCVCGAN performed significantly better than a-DWI on the quantitative metrics PSNR, SSIM, FSIM, MSE, WPSNR, and WMSE (p < 0.001). This performance is comparable to the optimal level achieved by the latest synthesis algorithms. Subjective scores for lesion visibility, anatomical detail, image distortion, and overall image quality were significantly higher for s-DWI and b-DWI than for a-DWI (p < 0.001), while there was no significant difference between the scores of s-DWI and b-DWI (p > 0.05). The consistency of b-ADC and s-ADC readings was comparable between readers (ICC: b-ADC 0.87-0.90; s-ADC 0.88-0.89, respectively). The repeatability of b-ADC and s-ADC readings by the same reader was also comparable (Reader 1 ICC: b-ADC 0.85-0.86, s-ADC 0.85-0.93; Reader 2 ICC: b-ADC 0.86-0.87, s-ADC 0.89-0.92, respectively).
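The inter- and intra-reader agreement reported above rests on the intraclass correlation coefficient. As a hedged illustration (not the authors' code, and the two-way random-effects, absolute-agreement, single-measure form ICC(2,1) is an assumption — the abstract does not state which ICC variant was used), it can be computed from an n-subjects × k-readers matrix as:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random-effects, absolute agreement, single measure.
    `ratings` is an (n subjects) x (k raters) array."""
    y = np.asarray(ratings, dtype=np.float64)
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)
    col_means = y.mean(axis=0)
    # Two-way ANOVA sums of squares
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((y - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)              # between-subjects mean square
    ms_c = ss_cols / (k - 1)              # between-raters mean square
    ms_e = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical ADC readings (units of 1e-3 mm^2/s) from two readers:
readings = np.array([[1.10, 1.12],
                     [0.95, 0.97],
                     [1.30, 1.28],
                     [1.05, 1.04],
                     [0.88, 0.90]])
print(round(icc_2_1(readings), 3))
```

Values near 1 indicate that the two readers (or two sessions of the same reader) order and scale the ADC measurements almost identically, which is the sense in which b-ADC and s-ADC are reported as comparable.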
ADCVCGAN shows excellent promise for generating gastric cancer DWI images: it effectively reduces scanning time, improves image quality, and preserves the authenticity of the s-DWI images and their s-ADC values, thus providing a basis for assisting clinical decision making.