Rai Hari Mohan, Yoo Joon, Agarwal Saurabh, Agarwal Neha
School of Computing, Gachon University, Seongnam 13120, Republic of Korea.
Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea.
Bioengineering (Basel). 2025 Jan 15;12(1):73. doi: 10.3390/bioengineering12010073.
Breast cancer ranks as the second most prevalent cancer globally and is the most frequently diagnosed cancer among women; early, automated, and precise detection is therefore essential. Most AI-based techniques for breast cancer detection are complex and computationally expensive. To overcome this challenge, we present LightweightUNet, an innovative hybrid deep learning (DL) classifier for the accurate classification of breast cancer. The proposed model has a low computational cost owing to the small number of layers in its architecture, and its adaptive nature stems from its use of depth-wise separable convolution. We employed a multimodal approach to validate the model's performance, using 13,000 images from two distinct modalities: mammogram imaging (MGI) and ultrasound imaging (USI). The multimodal imaging datasets were collected from seven different sources, including the benchmark datasets DDSM, MIAS, INbreast, BrEaST, BUSI, Thammasat, and HMSS. Because the datasets come from various sources, we resized all images to a uniform 256 × 256 pixels and normalized them using the Box-Cox transformation. Because the USI dataset is smaller, we applied the StyleGAN3 model to generate 10,000 synthetic ultrasound images. In this work, we performed two separate experiments: the first on the real dataset without augmentation and the second on the real + GAN-augmented dataset, both using the proposed method. Using 5-fold cross-validation, the proposed model obtained good results on the real dataset (87.16% precision, 86.87% recall, 86.84% F1-score, and 86.87% accuracy) without any additional data. The second experiment yielded better performance on the real + GAN-augmented dataset (96.36% precision, 96.35% recall, 96.35% F1-score, and 96.35% accuracy).
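The abstract attributes the model's low computational cost to depth-wise separable convolution. As a rough illustration of why this reduces cost, the sketch below compares weight counts for a standard convolution and its depth-wise separable factorization; the kernel size and channel counts are illustrative assumptions, not layer shapes from the paper.

```python
# Sketch: parameter counts for a standard k x k convolution versus a
# depth-wise separable one (depthwise k x k per channel + 1x1 pointwise).
# Layer shapes below are illustrative assumptions, not from the paper.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """One k x k depthwise filter per input channel, then a 1x1 pointwise conv."""
    return k * k * c_in + c_in * c_out

if __name__ == "__main__":
    k, c_in, c_out = 3, 64, 128  # hypothetical layer shape
    std = standard_conv_params(k, c_in, c_out)   # 73,728 weights
    dws = depthwise_separable_params(k, c_in, c_out)  # 8,768 weights
    print(f"standard: {std}, separable: {dws}, ratio: {std / dws:.1f}x")
```

The reduction factor is roughly 1/c_out + 1/k², which is why stacking such layers keeps the network lightweight.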
This multimodal approach with LightweightUNet improves performance on the combined dataset by 9.20% in precision, 9.48% in recall, 9.51% in F1-score, and 9.48% in accuracy. The proposed LightweightUNet model performs well owing to its novel network design, GAN-based synthetic-image augmentation, and multimodal training strategy. These results demonstrate the model's strong potential for clinical application.
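The preprocessing step normalizes intensities with the Box-Cox transformation. A minimal single-value sketch of the transform is below; the fixed λ and the positivity shift are illustrative assumptions, not the paper's implementation, which would typically estimate λ from the data (e.g. by maximum likelihood) and apply the transform image-wide.

```python
import math

def box_cox(x: float, lmbda: float) -> float:
    """Box-Cox power transform for a positive value x:
    (x**lmbda - 1) / lmbda for lmbda != 0, and log(x) for lmbda == 0."""
    if x <= 0:
        raise ValueError("Box-Cox requires strictly positive inputs")
    if lmbda == 0:
        return math.log(x)
    return (x ** lmbda - 1.0) / lmbda

if __name__ == "__main__":
    # Pixel intensities include zeros, so shift by 1 before transforming
    # (an assumed convention, not stated in the abstract).
    pixels = [0, 64, 128, 255]
    transformed = [box_cox(p + 1, 0.5) for p in pixels]  # lmbda = 0.5 assumed
    print(transformed)
```

Because the transform is monotonic, it reshapes the intensity distribution toward normality without reordering pixel values.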