2S-BUSGAN: A Novel Generative Adversarial Network for Realistic Breast Ultrasound Image with Corresponding Tumor Contour Based on Small Datasets.

Affiliations

College of Biomedical Engineering, Sichuan University, Chengdu 610065, China.

Department of Ultrasound, West China Hospital, Sichuan University, Chengdu 610065, China.

Publication Information

Sensors (Basel). 2023 Oct 20;23(20):8614. doi: 10.3390/s23208614.

Abstract

Deep learning (DL) models for breast ultrasound (BUS) image analysis face challenges from data imbalance and the scarcity of atypical tumor samples. Generative Adversarial Networks (GANs) address these challenges by providing efficient data augmentation for small datasets. However, current GAN approaches fail to capture the structural features of BUS images, so the generated images lack structural legitimacy and appear unrealistic. Furthermore, generated images must be manually annotated before they can be used in downstream tasks. We therefore propose a two-stage GAN framework, 2s-BUSGAN, for generating annotated BUS images. It consists of a Mask Generation Stage (MGS) and an Image Generation Stage (IGS), which generate benign and malignant BUS images together with their corresponding tumor contours. In addition, we employ a Feature-Matching Loss (FML) to enhance the quality of the generated images and a Differential Augmentation Module (DAM) to improve GAN performance on small datasets. We conducted experiments on two datasets, BUSI and Collected. The results indicate that the quality of the generated images is improved compared with traditional GAN methods. Moreover, the generated images were evaluated by ultrasound experts, demonstrating the possibility of deceiving doctors. A comparative evaluation showed that our method also outperforms traditional GAN methods when used to train segmentation and classification models: classification accuracy reached 69% and 85.7% on the two datasets, respectively, about 3% and 2% higher than the traditional augmentation baseline, and the segmentation models trained on the 2s-BUSGAN-augmented datasets achieved Dice scores of 75% and 73%, both higher than those obtained with traditional augmentation methods. Our research tackles the challenges of imbalanced and limited BUS image data, and the 2s-BUSGAN augmentation method holds potential for enhancing deep learning model performance in this field.
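The abstract only names the components of the pipeline, so the following is a minimal, illustrative PyTorch sketch of the two-stage idea rather than the authors' implementation: a mask generator (the MGS role) that samples a tumor-contour mask from noise and a benign/malignant label, a mask-conditioned image generator (the IGS role), a discriminator that exposes intermediate features for a feature-matching loss, and a simple differentiable augmentation standing in for the DAM. All layer sizes, names, and the brightness-jitter augmentation are assumptions, not details taken from the paper.

```python
# Illustrative sketch of a 2s-BUSGAN-style pipeline. Architectures, sizes, and
# the augmentation below are assumptions for demonstration, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskGenerator(nn.Module):
    """MGS role: map noise + class label to a 64x64 tumor-contour mask."""
    def __init__(self, z_dim=100, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim + n_classes, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid(),  # soft binary mask
        )

    def forward(self, z, y_onehot):
        x = torch.cat([z, y_onehot], dim=1)[..., None, None]   # (B, z_dim+n_classes, 1, 1)
        return self.net(x)                                      # (B, 1, 64, 64)

class ImageGenerator(nn.Module):
    """IGS role: translate a tumor mask into a BUS image (mask-conditioned)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, 1, 1), nn.ReLU(True),
            nn.Conv2d(64, 64, 3, 1, 1), nn.ReLU(True),
            nn.Conv2d(64, 1, 3, 1, 1), nn.Tanh(),
        )

    def forward(self, mask):
        return self.net(mask)                                   # (B, 1, 64, 64)

class Discriminator(nn.Module):
    """Returns intermediate features so a feature-matching loss can be computed.
    Conditioning the discriminator on the mask is an assumption of this sketch."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(2, 64, 4, 2, 1), nn.LeakyReLU(0.2, True)),
            nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True)),
            nn.Sequential(nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True)),
        ])
        self.head = nn.Conv2d(256, 1, 4, 1, 0)

    def forward(self, img, mask):
        feats, x = [], torch.cat([img, mask], dim=1)
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)
        return self.head(x), feats

def feature_matching_loss(feats_real, feats_fake):
    """FML role: L1 distance between discriminator features of real and fake images."""
    return sum(F.l1_loss(f, r.detach()) for f, r in zip(feats_fake, feats_real))

def diff_augment(x):
    """DAM role (illustrative): differentiable per-sample brightness jitter applied
    to both real and generated images before the discriminator."""
    return x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5) * 0.2

# Usage: synthesize one annotated sample (mask + image pair).
G_m, G_i = MaskGenerator(), ImageGenerator()
z = torch.randn(1, 100)
y = F.one_hot(torch.tensor([0]), num_classes=2).float()  # label order is assumed
mask = G_m(z, y)
image = G_i(mask)
```

Generating the contour first and then conditioning the image generator on it means every synthetic image comes with a pixel-level annotation by construction, which appears to be the property 2s-BUSGAN exploits when augmenting data for segmentation and classification.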

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bbce/10610581/473d62eb858b/sensors-23-08614-g001.jpg
