Department of Medical Engineering, Shaoxing Hospital of Traditional Chinese Medicine, Shaoxing, Zhejiang, People's Republic of China.
Department of Radiology, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, People's Republic of China.
BMC Med Imaging. 2024 Jun 5;24(1):133. doi: 10.1186/s12880-024-01307-3.
Breast cancer is the most common cancer among women, and ultrasound is a common tool for early screening. Nowadays, deep learning techniques are applied as auxiliary tools that provide predictive results to help doctors decide whether further examination or treatment is needed. This study aimed to develop a hybrid learning approach for breast ultrasound classification by extracting more potential features from local and multi-center ultrasound data.
We proposed a hybrid learning approach to classify breast tumors as benign or malignant. Three multi-center datasets (BUSI, BUS, OASBUD) were used to pretrain a model by federated learning, and the model was then fine-tuned locally on each dataset. The proposed model consisted of a convolutional neural network (CNN) and a graph neural network (GNN), aiming to extract features from images at the spatial level and from graphs at the geometric level. The input images are small and require no pixel-level labels, and the input graphs are generated automatically in an unsupervised manner, which saves both labeling effort and memory.
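The federated pretraining step can be sketched as FedAvg-style aggregation: each center trains on its own images, and only model parameters are shared and averaged, weighted by local dataset size. The following minimal sketch uses illustrative client names and dataset sizes and represents a model as a flat list of float parameters; it is not the authors' actual implementation.

```python
def fedavg(client_params, client_sizes):
    """Average client model parameters, weighted by local dataset size.

    client_params: one parameter vector per center (list of floats each).
    client_sizes:  number of training samples each center holds.
    """
    total = sum(client_sizes)
    averaged = [0.0] * len(client_params[0])
    for params, size in zip(client_params, client_sizes):
        weight = size / total  # larger centers contribute more
        for i, p in enumerate(params):
            averaged[i] += weight * p
    return averaged


# Toy round with three centers (names from the paper; sizes are made up):
# each center shares parameter updates, never raw images. The server
# aggregates them, then every center fine-tunes the result locally.
centers = {
    "BUSI":   ([1.0, 2.0], 600),
    "BUS":    ([3.0, 4.0], 200),
    "OASBUD": ([5.0, 6.0], 200),
}
params = [p for p, _ in centers.values()]
sizes = [n for _, n in centers.values()]
global_params = fedavg(params, sizes)  # weighted mean of the three vectors
```

In a full implementation this aggregation would run for several communication rounds over the CNN+GNN weights before the local fine-tuning stage.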
The classification AUC of our proposed method is 0.911, 0.871, and 0.767 for BUSI, BUS, and OASBUD, respectively. The balanced accuracies are 87.6%, 85.2%, and 61.4%, respectively. The results show that our method outperforms conventional methods.
Our hybrid approach can learn the inter-center features shared across multi-center data and the intra-center features of local data. It shows potential for aiding doctors in classifying breast tumors in ultrasound at an early stage.