Department of Radiology, Adiyaman Training and Research Hospital, Adiyaman, Turkey.
Department of Biomedical Imaging, Universiti Malaya Research Imaging Centre, Faculty of Medicine, Universiti Malaya, 59100 Kuala Lumpur, Malaysia.
Med Eng Phys. 2022 Oct;108:103895. doi: 10.1016/j.medengphy.2022.103895. Epub 2022 Sep 15.
Ultrasound (US) is an important imaging modality used to assess breast lesions for malignant features. In the past decade, many machine learning models have been developed for automated discrimination of breast cancer versus normal tissue on US images, but few have classified the images according to the Breast Imaging Reporting and Data System (BI-RADS) classes. This work aimed to develop a model for classifying US breast lesions within a BI-RADS classification framework using a new multi-class US image dataset. We proposed a deep model that combined a novel pyramid triple deep feature generator (PTDFG) with transfer learning based on three pre-trained networks for creating deep features. Bilinear interpolation was applied to decompose the input image into four images of successively smaller dimensions, constituting a four-level pyramid for downstream feature generation with the pre-trained networks. Neighborhood component analysis was applied to the generated features to select each network's 1,000 most informative features, which were fed to a support vector machine classifier for automated classification using a ten-fold cross-validation strategy. Our proposed model was validated on a new US image dataset containing 1,038 images divided into eight BI-RADS classes, together with histopathological results. We defined three classification schemes: Case 1 involved the classification of all images into eight categories; Case 2, classification of breast US images into five BI-RADS classes; and Case 3, classification of BI-RADS 4 lesions into benign versus malignant classes. Our PTDFG-based transfer learning model attained accuracy rates of 79.29%, 80.42%, and 88.67% for Case 1, Case 2, and Case 3, respectively.
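The abstract is the only description of the pipeline available here, so the following is a minimal sketch of the PTDFG-style workflow it outlines: bilinear-interpolation pyramid decomposition, deep feature extraction with a pre-trained network, selection of the most informative features, and SVM classification with ten-fold cross-validation. Several elements are assumptions for illustration only: a single torchvision ResNet18 stands in for the three unnamed pre-trained networks, an ANOVA F-score selector stands in for the NCA-based feature ranking (which has no direct scikit-learn counterpart), and the helper names (`pyramid_levels`, `deep_features`, `classify`) are hypothetical.

```python
# Minimal sketch of a PTDFG-style pipeline, under the assumptions stated above.
import cv2
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained backbone used as a fixed feature extractor (classification head removed).
# ResNet18 is an assumption; the paper's three networks are not named in the abstract.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def pyramid_levels(img, n_levels=4):
    """Bilinear downscaling into four successively smaller images (the pyramid)."""
    levels = [img]
    for _ in range(n_levels - 1):
        h, w = levels[-1].shape[:2]
        levels.append(cv2.resize(levels[-1], (w // 2, h // 2),
                                 interpolation=cv2.INTER_LINEAR))
    return levels

@torch.no_grad()
def deep_features(img_bgr):
    """Concatenate backbone features extracted from every pyramid level."""
    feats = []
    for level in pyramid_levels(img_bgr):
        rgb = cv2.cvtColor(level, cv2.COLOR_BGR2RGB)
        # Resize each level to the backbone's expected input size.
        rgb = cv2.resize(rgb, (224, 224), interpolation=cv2.INTER_LINEAR)
        x = to_tensor(rgb).unsqueeze(0).to(device)
        feats.append(backbone(x).squeeze(0).cpu().numpy())
    return np.concatenate(feats)

def classify(images_bgr, labels):
    """Feature selection + SVM, scored with ten-fold cross-validation."""
    X = np.stack([deep_features(img) for img in images_bgr])
    y = np.asarray(labels)
    clf = make_pipeline(
        StandardScaler(),
        # Stand-in for NCA-based ranking of the 1,000 most informative features.
        SelectKBest(f_classif, k=min(1000, X.shape[1])),
        SVC(kernel="rbf"),
    )
    return cross_val_score(clf, X, y, cv=10).mean()
```

In this sketch the pyramid levels are resized back to the backbone's fixed input size before feature extraction; the multi-scale information survives because downsampling discards progressively more high-frequency detail at each level. The published model reports per-case accuracies of 79.29%, 80.42%, and 88.67%; this illustrative stand-in makes no claim to reproduce them.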