
Adversarial learning for beamforming domain transfer in ultrasound medical imaging.

Author Information

Seoni Silvia, Salvi Massimo, Matrone Giulia, Lapia Francesco, Busso Chiara, Minetto Marco A, Meiburger Kristen M

Affiliations

Biolab, PoliTo(BIO)Med Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy.

Department of Electrical, Computer and Biomedical Engineering, University of Pavia, via Ferrata 5, 27100 Pavia, Italy.

Publication Information

Ultrasonics. 2025 Dec;156:107749. doi: 10.1016/j.ultras.2025.107749. Epub 2025 Jul 9.

Abstract

Beamforming, the process of reconstructing B-mode images from raw radiofrequency (RF) data, significantly influences ultrasound image quality. While advanced beamforming methods aim to enhance the traditional Delay and Sum (DAS) technique, they require access to raw RF data, which is often unavailable to researchers when using clinical ultrasound scanners. Given that Filtered Delay Multiply and Sum (F-DMAS) is known to provide superior image quality compared to conventional DAS, this study introduces the idea of employing generative adversarial networks (GANs) that transform plane wave DAS images into ones resembling those produced by F-DMAS. We validated the adversarial approach employing three different architectures (traditional Pix2Pix, Pyramidal Pix2Pix and CycleGAN) using full-reference metrics: Root Mean Square Error (RMSE) and Peak Signal-to-Noise Ratio (PSNR). We further propose employing a texture analysis to validate consistency between the generated images and target images, using 27 first-order and second-order parameters; contrast enhancement was evaluated using the Contrast Improvement Index (CII), and clinical relevance was determined through expert qualitative evaluation. The adversarial methods were also compared with traditional image enhancement methods, such as contrast limited adaptive histogram equalization (CLAHE) and histogram matching. The image similarity metrics between all methods were comparable, with the Pyramidal Pix2Pix GAN method showing the best values compared to traditional techniques and other generative models (PSNR = 18.0 ± 0.6 dB, RMSE = 0.126 ± 0.008). The texture features proved to be a clear discriminant between traditional methods and generative models, with values much closer to the target F-DMAS image for the generative models. All employed methods showed an improved contrast over original PW DAS images. A clinical evaluation was then employed to assess the contribution of the generated images compared to the original ones and to distinguish which generative model provided the best qualitative images. The proposed generative adversarial approach proves to be a viable option for enhancing B-mode ultrasound images when there is no access to raw RF data and demonstrates how texture features can be employed to validate deep learning generative models.
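The full-reference metrics reported above (RMSE and PSNR) have standard definitions that are easy to reproduce. The sketch below is illustrative only and is not the authors' code; the function names and the assumption that B-mode images are normalized to the range [0, 1] (so the PSNR peak value is 1.0) are my own choices, not taken from the paper.

```python
import numpy as np

def rmse(ref, img):
    """Root Mean Square Error between two equally shaped images."""
    ref = np.asarray(ref, dtype=np.float64)
    img = np.asarray(img, dtype=np.float64)
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def psnr(ref, img, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB, for images scaled to [0, data_range].

    PSNR = 20 * log10(data_range / RMSE); higher means the generated
    image is closer to the reference (here, the F-DMAS target).
    """
    err = rmse(ref, img)
    return float(20.0 * np.log10(data_range / err))

# Hypothetical usage: compare a generated image against an F-DMAS target.
target = np.zeros((4, 4))            # stand-in for the F-DMAS reference
generated = np.full((4, 4), 0.1)     # stand-in for a GAN output
print(rmse(target, generated))       # → 0.1
print(psnr(target, generated))       # → 20.0 dB
```

Note that a PSNR of 18.0 dB at data_range 1.0 corresponds to an RMSE of about 0.126, which matches the paired values reported for the Pyramidal Pix2Pix model.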

