

Enhancing Ultrasound Image Quality Across Disease Domains: Application of Cycle-Consistent Generative Adversarial Network and Perceptual Loss.

Author Information

Athreya Shreeram, Radhachandran Ashwath, Ivezić Vedrana, Sant Vivek R, Arnold Corey W, Speier William

Affiliations

Department of Electrical and Computer Engineering, University of California Los Angeles, Los Angeles, CA, United States.

Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States.

Publication Information

JMIR Biomed Eng. 2024 Dec 17;9:e58911. doi: 10.2196/58911.

Abstract

BACKGROUND

Numerous studies have explored image processing techniques aimed at enhancing ultrasound images to narrow the performance gap between low-quality portable devices and high-end ultrasound equipment. These investigations often use registered image pairs created by modifying the same image through methods like downsampling or adding noise, rather than using separate images from different machines. Additionally, they rely on organ-specific features, limiting the models' generalizability across various imaging conditions and devices. The challenge remains to develop a universal framework capable of improving image quality across different devices and conditions, independent of registration or specific organ characteristics.
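For context, the registered-pair construction criticized above can be illustrated with a short sketch: a high-quality frame is degraded by downsampling and additive noise so that the "low-quality" input stays perfectly aligned with its target, unlike frames acquired on a separate device. The function below is a minimal illustration under those assumptions (single-channel float images in [0, 1]; the scale factor and noise level are arbitrary placeholders), not the preprocessing used by any particular prior study.

```python
import numpy as np
from skimage.transform import resize  # any resampling utility would do; skimage is an assumed choice


def simulate_low_quality(hq_image: np.ndarray, scale: float = 0.5, noise_std: float = 0.05) -> np.ndarray:
    """Degrade a high-quality ultrasound frame into a synthetic, registered low-quality counterpart."""
    h, w = hq_image.shape[:2]
    # Downsample, then upsample back to the original grid to discard high-frequency detail.
    low_res = resize(hq_image, (int(h * scale), int(w * scale)), anti_aliasing=True)
    degraded = resize(low_res, (h, w))
    # Add zero-mean Gaussian noise as a crude stand-in for sensor noise.
    degraded = degraded + np.random.normal(0.0, noise_std, size=degraded.shape)
    return np.clip(degraded, 0.0, 1.0)
```

Because the degraded frame is derived from the original, pixel-wise losses and registration-dependent metrics are trivially applicable, which is exactly what real cross-device pairs do not allow.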

OBJECTIVE

This study aims to develop a robust framework that enhances the quality of ultrasound images, particularly those captured with compact, portable devices, which are often constrained by low quality due to hardware limitations. The framework is designed to effectively process nonregistered ultrasound image pairs, a common challenge in medical imaging, across various clinical settings and device types. By addressing these challenges, the research seeks to provide a more generalized and adaptable solution that can be widely applied across diverse medical scenarios, improving the accessibility and quality of diagnostic imaging.

METHODS

A retrospective analysis was conducted using a cycle-consistent generative adversarial network (CycleGAN) framework enhanced with perceptual loss to improve the quality of ultrasound images, focusing on nonregistered image pairs from various organ systems. The perceptual loss was integrated to preserve anatomical integrity by comparing deep features extracted from pretrained neural networks. The model's performance was evaluated against corresponding high-resolution images, ensuring that the enhanced outputs closely mimic those from high-end ultrasound devices. The model was trained and validated on a publicly available, diverse dataset to ensure robustness and generalizability across different imaging scenarios.
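The abstract does not specify the exact loss formulation, feature backbone, or weighting, so the following is a minimal PyTorch sketch of how a perceptual term can be added to one direction of a CycleGAN generator objective. The frozen VGG-16 backbone, the feature layer, the generators G and F, the discriminator D_hq, and the loss weights are all assumed placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class PerceptualLoss(nn.Module):
    """Compare deep features of two images with a frozen, ImageNet-pretrained VGG-16 (assumed backbone)."""

    def __init__(self, layer: int = 16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:layer]
        for p in vgg.parameters():
            p.requires_grad = False        # the feature extractor stays fixed during training
        self.features = vgg.eval()
        self.criterion = nn.L1Loss()

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # Single-channel ultrasound frames are repeated to 3 channels for the ImageNet backbone.
        if x.shape[1] == 1:
            x, y = x.repeat(1, 3, 1, 1), y.repeat(1, 3, 1, 1)
        return self.criterion(self.features(x), self.features(y))


def generator_loss_lq_to_hq(G, F, D_hq, real_lq,
                            adv, cycle, perceptual,
                            lambda_cyc: float = 10.0, lambda_perc: float = 1.0) -> torch.Tensor:
    """One direction (low-quality -> high-quality) of a CycleGAN generator objective with a perceptual term."""
    fake_hq = G(real_lq)                          # translate the low-quality frame to the high-quality domain
    recon_lq = F(fake_hq)                         # map it back for the cycle-consistency check
    disc_out = D_hq(fake_hq)
    loss_adv = adv(disc_out, torch.ones_like(disc_out))   # fool the high-quality-domain discriminator
    loss_cyc = cycle(recon_lq, real_lq)                    # reconstruction should match the input frame
    loss_perc = perceptual(fake_hq, real_lq)               # deep features should preserve the input anatomy
    return loss_adv + lambda_cyc * loss_cyc + lambda_perc * loss_perc
```

Comparing the translated frame against its own input in feature space (rather than against an unaligned high-quality frame) is one way to enforce anatomical consistency without registered pairs; whether the paper uses exactly this pairing is not stated in the abstract.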

RESULTS

The advanced CycleGAN framework, enhanced with perceptual loss, significantly outperformed the previous state-of-the-art method, stable CycleGAN, on multiple evaluation metrics. Specifically, our method achieved a structural similarity index (SSIM) of 0.2889 versus 0.2502 (P<.001), a peak signal-to-noise ratio (PSNR) of 15.8935 versus 14.9430 (P<.001), and a learned perceptual image patch similarity (LPIPS) score of 0.4490 versus 0.5005 (P<.001), where lower LPIPS indicates closer perceptual similarity. These results demonstrate the model's superior ability to enhance image quality while preserving critical anatomical details, thereby improving diagnostic usefulness.
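For reference, the three reported metrics can be computed with standard open-source implementations. The snippet below is a sketch assuming scikit-image for SSIM and PSNR and the `lpips` package (AlexNet backbone) for LPIPS; the authors' actual evaluation code and settings may differ.

```python
import numpy as np
import torch
from skimage.metrics import structural_similarity, peak_signal_noise_ratio
import lpips  # pip install lpips; assumed here purely for illustration


def evaluate_pair(enhanced: np.ndarray, reference: np.ndarray) -> dict:
    """Compute SSIM, PSNR, and LPIPS between an enhanced frame and its high-end reference.

    Both inputs are assumed to be single-channel float arrays in [0, 1].
    """
    ssim = structural_similarity(reference, enhanced, data_range=1.0)
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)

    # LPIPS expects 3-channel tensors scaled to [-1, 1], shaped (N, C, H, W).
    loss_fn = lpips.LPIPS(net="alex")
    to_tensor = lambda a: torch.from_numpy(a).float()[None, None].repeat(1, 3, 1, 1) * 2 - 1
    lpips_score = loss_fn(to_tensor(enhanced), to_tensor(reference)).item()

    return {"ssim": ssim, "psnr": psnr, "lpips": lpips_score}
```

Higher SSIM and PSNR and lower LPIPS indicate closer agreement with the high-end reference image, matching the direction of improvement reported above.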

CONCLUSIONS

This study presents a significant advancement in ultrasound imaging by leveraging a CycleGAN model enhanced with perceptual loss to bridge the quality gap between images from different devices. By processing nonregistered image pairs, the model not only enhances visual quality but also ensures the preservation of essential anatomical structures, crucial for accurate diagnosis. This approach holds the potential to democratize high-quality ultrasound imaging, making it accessible through low-cost portable devices, thereby improving health care outcomes, particularly in resource-limited settings. Future research will focus on further validation and optimization for clinical use.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c793/11688586/1c8056be44f4/biomedeng_v9i1e58911_fig1.jpg
