

Improving realism in abdominal ultrasound simulation combining a segmentation-guided loss and polar coordinates training.

Authors

Vitale Santiago, Orlando José Ignacio, Iarussi Emmanuel, Díaz Alejandro, Larrabide Ignacio

Affiliations

National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina.

Pladema Institute, UNICEN, Tandil, Buenos Aires, Argentina.

Publication

Med Phys. 2025 Jun;52(6):4540-4556. doi: 10.1002/mp.17801. Epub 2025 Mar 30.

DOI: 10.1002/mp.17801
PMID: 40159565
Abstract

BACKGROUND

Ultrasound (US) simulation helps train physicians and medical students in image acquisition and interpretation, enabling safe practice of transducer manipulation and organ identification. Current simulators generate realistic images from reference scans. Although physics-based simulators provide real-time images, they lack sufficient realism, while recent deep learning-based models based on unpaired image-to-image translation improve realism but introduce anatomical inconsistencies.

PURPOSE

We propose a novel framework to reduce hallucinations from generative adversarial networks (GANs) used on physics-based simulations, enhancing anatomical accuracy and realism in abdominal US simulation. Our method aims to produce anatomically consistent images free from artifacts within and outside the field of view (FoV).

METHODS

We introduce a segmentation-guided loss that enforces anatomical consistency using a pre-trained Unet model that segments abdominal organs from physics-based simulated scans. Penalizing segmentation discrepancies before and after the translation cycle helps prevent unrealistic artifacts. Additionally, we propose training GANs on images in polar coordinates to limit the field of view to non-blank regions. We evaluated our approach on unpaired datasets comprising 617 real abdominal US images from a SonoSite-M turbo v1.3 scanner and 971 artificial scans from a ray-casting simulator. Data were partitioned at the patient level into training (70%), validation (10%), and testing (20%) sets. Performance was quantitatively assessed with Fréchet and Kernel Inception Distances (FID and KID) and organ-specific histogram distances, reporting 95% confidence intervals. We compared our model against generative methods such as CUT, UVCGANv2, and UNSB, performing statistical analyses with Wilcoxon tests and Bonferroni correction for the FID and KID comparisons. A perceptual realism study involving expert radiologists was also conducted.
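The segmentation-guided penalty can be illustrated with a minimal sketch. Assuming the pre-trained Unet yields integer organ-label maps (here flattened to plain lists) before and after the translation cycle, the simplest consistency term is the fraction of pixels whose label changed; the paper's actual loss may instead use a soft overlap measure such as Dice, and the function and label names below are hypothetical.

```python
def segmentation_consistency_loss(seg_before, seg_after):
    """Fraction of pixels whose organ label changed across the
    simulated -> realistic -> simulated translation cycle.
    0.0 means the anatomy was perfectly preserved."""
    if len(seg_before) != len(seg_after):
        raise ValueError("segmentation maps must have the same size")
    changed = sum(a != b for a, b in zip(seg_before, seg_after))
    return changed / len(seg_before)

# Toy example: 1 of 8 pixels flips from liver (1) to background (0)
before = [0, 1, 1, 1, 2, 2, 0, 0]  # labels: 0=background, 1=liver, 2=gallbladder
after  = [0, 1, 1, 0, 2, 2, 0, 0]
print(segmentation_consistency_loss(before, after))  # 0.125
```

During GAN training this scalar would be weighted and added to the cycle-consistency objective, so translations that hallucinate or erase organs are penalized.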

RESULTS

Our method significantly reduced FID and KID by 66% and 89%, respectively, compared to CycleGAN, and by 34% and 59% compared to the leading alternative, UVCGANv2. No significant differences in echogenicity distributions were found between real and simulated images within liver and gallbladder regions. The user study indicated our simulated scans fooled radiologists in 36.2% of cases, outperforming the other methods.
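KID, one of the two image-quality metrics reported above, is commonly computed as an unbiased squared maximum mean discrepancy between real and generated Inception features under a cubic polynomial kernel. A minimal pure-Python sketch (feature vectors as plain lists; no subset averaging, which production implementations add) looks like:

```python
def poly_kernel(x, y):
    # Cubic polynomial kernel k(x, y) = (x.y / d + 1)^3, d = feature dimension
    d = len(x)
    return (sum(a * b for a, b in zip(x, y)) / d + 1.0) ** 3

def kid(feats_real, feats_fake):
    """Unbiased MMD^2 estimate between two sets of feature vectors."""
    m, n = len(feats_real), len(feats_fake)
    k_xx = sum(poly_kernel(feats_real[i], feats_real[j])
               for i in range(m) for j in range(m) if i != j) / (m * (m - 1))
    k_yy = sum(poly_kernel(feats_fake[i], feats_fake[j])
               for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    k_xy = sum(poly_kernel(x, y)
               for x in feats_real for y in feats_fake) / (m * n)
    return k_xx + k_yy - 2.0 * k_xy

# Toy features: a distribution compared with itself scores lower
# than when compared with a clearly different one.
real = [[1.0, 0.0], [0.0, 1.0]]
fake = [[5.0, 5.0], [6.0, 6.0]]
```

In practice the features come from an InceptionV3 network and KID is averaged over random subsets; the toy vectors here only demonstrate the estimator's behaviour.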

CONCLUSIONS

Our segmentation-guided, polar-coordinates-trained CycleGAN framework significantly reduces hallucinations and ensures anatomical consistency and realism in simulated abdominal US images, surpassing existing methods.
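The polar-coordinates training described in the Methods can be sketched as a resampling step: the fan-shaped Cartesian scan is mapped onto a (radius, angle) grid so the network only ever sees the fan interior, with no blank corners entering training. The nearest-neighbour version below is a hypothetical illustration (the abstract does not specify the authors' exact resampling scheme), with images as nested lists and the transducer apex given in pixel coordinates.

```python
import math

def to_polar(img, apex, r_max, theta_max, out_h, out_w):
    """Resample a Cartesian fan image onto a (radius, angle) grid.
    Output rows index depth and columns index beam angle, so the fan
    fills the whole rectangle. Samples outside the input stay 0."""
    h, w = len(img), len(img[0])
    ax, ay = apex  # apex pixel (x, y); beams fan downward from here
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        r = r_max * i / (out_h - 1)
        for j in range(out_w):
            theta = theta_max * (j / (out_w - 1) - 0.5)  # centred fan
            x = int(round(ax + r * math.sin(theta)))
            y = int(round(ay + r * math.cos(theta)))
            if 0 <= x < w and 0 <= y < h:
                out[i][j] = img[y][x]
    return out
```

The central output column (theta = 0) simply reads the vertical beam below the apex; after GAN translation, the inverse mapping would scan-convert the result back to the familiar fan display.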


Similar Articles

1. Improving realism in abdominal ultrasound simulation combining a segmentation-guided loss and polar coordinates training.
   Med Phys. 2025 Jun;52(6):4540-4556. doi: 10.1002/mp.17801. Epub 2025 Mar 30.
2. Improving realism in patient-specific abdominal ultrasound simulation using CycleGANs.
   Int J Comput Assist Radiol Surg. 2020 Feb;15(2):183-192. doi: 10.1007/s11548-019-02046-5. Epub 2019 Aug 7.
3. A novel cross-modal data augmentation method based on contrastive unpaired translation network for kidney segmentation in ultrasound imaging.
   Med Phys. 2025 Jun;52(6):3877-3887. doi: 10.1002/mp.17663. Epub 2025 Feb 4.
4. Semi-supervised abdominal multi-organ segmentation by object-redrawing.
   Med Phys. 2024 Nov;51(11):8334-8347. doi: 10.1002/mp.17364. Epub 2024 Aug 21.
5. Generative artificial intelligence to produce high-fidelity blastocyst-stage embryo images.
   Hum Reprod. 2024 Jun 3;39(6):1197-1207. doi: 10.1093/humrep/deae064.
6. The role of unpaired image-to-image translation for stain color normalization in colorectal cancer histology classification.
   Comput Methods Programs Biomed. 2023 Jun;234:107511. doi: 10.1016/j.cmpb.2023.107511. Epub 2023 Mar 26.
7. A Method based on Evolutionary Algorithms and Channel Attention Mechanism to Enhance Cycle Generative Adversarial Network Performance for Image Translation.
   Int J Neural Syst. 2023 May;33(5):2350026. doi: 10.1142/S0129065723500260. Epub 2023 Apr 5.
8. SpeckleGAN: a generative adversarial network with an adaptive speckle layer to augment limited training data for ultrasound image processing.
   Int J Comput Assist Radiol Surg. 2020 Sep;15(9):1427-1436. doi: 10.1007/s11548-020-02203-1. Epub 2020 Jun 18.
9. Anatomy-aware computed tomography-to-ultrasound spine registration.
   Med Phys. 2024 Mar;51(3):2044-2056. doi: 10.1002/mp.16731. Epub 2023 Sep 14.
10. Enhancing Ultrasound Image Quality Across Disease Domains: Application of Cycle-Consistent Generative Adversarial Network and Perceptual Loss.
   JMIR Biomed Eng. 2024 Dec 17;9:e58911. doi: 10.2196/58911.