Classification of breast lesions in ultrasound images using deep convolutional neural networks: transfer learning versus automatic architecture design.

Affiliations

School of Computing and Engineering, University of Derby, Derby, DE22 3AW, UK.

Department of Ultrasound, Shuguang Hospital affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai, China.

Publication information

Med Biol Eng Comput. 2024 Jan;62(1):135-149. doi: 10.1007/s11517-023-02922-y. Epub 2023 Sep 22.

Abstract

Deep convolutional neural networks (DCNNs) have demonstrated promising performance in classifying breast lesions in 2D ultrasound (US) images. Existing approaches typically use pre-trained models based on architectures designed for natural images with transfer learning. Fewer attempts have been made to design customized architectures specifically for this purpose. This paper presents a comprehensive evaluation of transfer learning based solutions and automatically designed networks, analyzing the accuracy and robustness of different recognition models in three respects. First, we develop six different DCNN models (BNet, GNet, SqNet, DsNet, RsNet, IncReNet) based on transfer learning. Second, we adapt the Bayesian optimization method to optimize a CNN (BONet) for classifying breast lesions. A retrospective dataset of 3034 US images collected from various hospitals is then used for evaluation. Extensive tests show that the BONet outperforms the other models, exhibiting higher accuracy (83.33%), a lower generalization gap (1.85%), shorter training time (66 min), and lower model complexity (approximately 0.5 million weight parameters). We also compare the diagnostic performance of all models against that of three experienced radiologists. Finally, we explore the use of saliency maps to explain the classification decisions made by the different models. Our investigation shows that saliency maps can assist in comprehending these classification decisions.
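The abstract does not give implementation details for the Bayesian-optimized network (BONet), so the authors' code is not reproduced here. The sketch below only illustrates the general recipe: define a small CNN whose depth, filter counts, kernel sizes, dropout rate, and learning rate form the search space, then let a Bayesian optimizer pick the configuration that maximizes validation accuracy. It uses KerasTuner's Bayesian optimizer as a stand-in; the 224x224 single-channel input size, the search ranges, and the `train_ds`/`val_ds` dataset objects are illustrative assumptions rather than details from the paper.

```python
# Hypothetical sketch of Bayesian hyperparameter/architecture search for a
# small CNN classifying breast ultrasound images (benign vs. malignant).
# Not the authors' implementation.
import keras_tuner as kt
from tensorflow import keras


def build_model(hp):
    """Build a small CNN whose hyperparameters are chosen by the tuner."""
    model = keras.Sequential()
    model.add(keras.layers.Input(shape=(224, 224, 1)))  # assumed input size
    for i in range(hp.Int("conv_blocks", 2, 4)):  # search the network depth
        model.add(keras.layers.Conv2D(
            filters=hp.Int(f"filters_{i}", 16, 64, step=16),
            kernel_size=hp.Choice(f"kernel_{i}", [3, 5]),
            padding="same", activation="relu"))
        model.add(keras.layers.MaxPooling2D())
    model.add(keras.layers.GlobalAveragePooling2D())
    model.add(keras.layers.Dropout(hp.Float("dropout", 0.0, 0.5, step=0.1)))
    model.add(keras.layers.Dense(1, activation="sigmoid"))  # binary output
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Float("lr", 1e-4, 1e-2, sampling="log")),
        loss="binary_crossentropy",
        metrics=["accuracy"])
    return model


# Gaussian-process-based Bayesian search over the space defined above,
# scored by validation accuracy.
tuner = kt.BayesianOptimization(
    build_model,
    objective="val_accuracy",
    max_trials=30,
    directory="bo_search",
    project_name="breast_us")

# train_ds / val_ds are assumed tf.data.Dataset objects of (image, label) pairs.
# tuner.search(train_ds, validation_data=val_ds, epochs=20)
# best_model = tuner.get_best_models(num_models=1)[0]
```

By contrast, the transfer-learning baselines named in the abstract (BNet, GNet, SqNet, DsNet, RsNet, IncReNet) would start from backbones pre-trained on natural images and replace the final classification layer before fine-tuning on the ultrasound data; that is the standard transfer-learning setup the abstract alludes to, not a detail it spells out.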

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/86db/10758370/1beb4ed51b13/11517_2023_2922_Fig1_HTML.jpg
