

CNN-Based Cross-Modality Fusion for Enhanced Breast Cancer Detection Using Mammography and Ultrasound.

Authors

Wang Yi-Ming, Wang Chi-Yuan, Liu Kuo-Ying, Huang Yung-Hui, Chen Tai-Been, Chiu Kon-Ning, Liang Chih-Yu, Lu Nan-Han

Affiliations

Department of Critical Care Medicine, E-DA Hospital, I-Shou University, Kaohsiung City 824005, Taiwan.

Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City 824005, Taiwan.

Publication Information

Tomography. 2024 Dec 12;10(12):2038-2057. doi: 10.3390/tomography10120145.

Abstract

Breast cancer is a leading cause of mortality among women in Taiwan and globally. Non-invasive imaging methods, such as mammography and ultrasound, are critical for early detection, yet each standalone modality has limited diagnostic accuracy. This study aims to enhance breast cancer detection through a cross-modality fusion approach combining mammography and ultrasound imaging, using advanced convolutional neural network (CNN) architectures. Breast images were sourced from public datasets, including the RSNA, the PAS, and Kaggle, and categorized into malignant and benign groups. Data augmentation techniques were used to address class imbalance in the ultrasound dataset. Three models were developed: (1) pre-trained CNNs integrated with machine learning classifiers, (2) transfer learning-based CNNs, and (3) a custom-designed 17-layer CNN for direct classification. Model performance was evaluated using metrics such as accuracy and the Kappa score. The custom 17-layer CNN outperformed the other models, achieving an accuracy of 0.964 and a Kappa score of 0.927. The transfer learning model achieved moderate performance (accuracy 0.846, Kappa 0.694), while the pre-trained CNNs with machine learning classifiers yielded the lowest results (accuracy 0.780, Kappa 0.559). Cross-modality fusion proved effective in leveraging the complementary strengths of mammography and ultrasound imaging. This study demonstrates the potential of cross-modality imaging and tailored CNN architectures to significantly improve diagnostic accuracy and reliability in breast cancer detection. The custom-designed model offers a practical solution for early detection, potentially reducing false positives and false negatives, and improving patient outcomes through timely and accurate diagnosis.
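The accuracy and Kappa figures reported above are linked metrics: Cohen's Kappa corrects observed agreement for the agreement expected by chance. As a minimal illustrative sketch (not the authors' code; the example confusion matrix is hypothetical), the computation from a 2×2 confusion matrix looks like this. Note that with balanced classes, an accuracy of 0.964 yields a Kappa of 0.928, consistent with the paper's reported 0.927:

```python
def cohens_kappa(cm):
    """Cohen's kappa from a confusion matrix.

    cm: square list of lists; rows = true class, columns = predicted class.
    """
    n = sum(sum(row) for row in cm)
    k = len(cm)
    # Observed agreement: fraction of samples on the diagonal.
    p_o = sum(cm[i][i] for i in range(k)) / n
    # Expected agreement under chance, from row/column marginals.
    row_tot = [sum(row) for row in cm]
    col_tot = [sum(cm[r][c] for r in range(k)) for c in range(k)]
    p_e = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical balanced binary example: 964 of 1000 samples correct.
cm = [[482, 18], [18, 482]]
accuracy = (cm[0][0] + cm[1][1]) / 1000  # 0.964
kappa = cohens_kappa(cm)                 # 0.928
```

Kappa is preferred alongside raw accuracy here because class-imbalanced medical datasets can inflate accuracy for a classifier that favors the majority class.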


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/abc1/11679931/d0ee5f3b9141/tomography-10-00145-g001.jpg
