

Feature Generalization for Breast Cancer Detection in Histopathological Images.

Affiliations

Programme of Information Technology, Xavier Institute of Social Service, Ranchi, 834001, Jharkhand, India.

Department of Computer Science, Punjabi University, Patiala, India.

Publication Information

Interdiscip Sci. 2022 Jun;14(2):566-581. doi: 10.1007/s12539-022-00515-1. Epub 2022 Apr 28.

Abstract

Transfer learning with deep architectures has recently achieved benchmark performance in computer-aided diagnosis (CAD) of breast cancer. In this setting, a pre-trained neural network must be fine-tuned on relevant data to extract useful features from the dataset. Besides its computational overhead, however, fine-tuning is prone to overfitting when features are extracted from smaller datasets. Handcrafted feature extraction techniques, as well as feature extraction with pre-trained deep networks, come to the rescue in this situation and have proved far more efficient and lightweight than deep architecture-based transfer learning. This research demonstrates the competence of feature engineering and representation learning for classifying breast cancer images, measured against the established and contemporary practice of transfer learning. Moreover, it reveals superior feature learning capacity with feature fusion, in contrast to the conventional belief that unknown feature patterns are best understood with representation learning alone. Experiments were conducted on two different, popular breast cancer image datasets, KIMIA Path960 and BreakHis, and image-level accuracy was compared across the feature extraction techniques above. An image-level accuracy of 97.81% was achieved on the KIMIA Path960 dataset using individual features extracted with a handcrafted (color histogram) technique; fusing uniform Local Binary Pattern (uLBP) and color histogram features raised the highest accuracy on the same dataset to 99.17%. On the BreakHis dataset, color histogram features yielded the highest classification accuracy of 88.41% for images at the 200X magnification factor.
Finally, the results are contrasted with the state of the art, and the proposed fusion-based techniques are observed to perform better on many occasions. For the BreakHis dataset, the highest accuracies of 87.60% (with the least standard deviation) and 85.77% were recorded for the 200X and 400X magnification factors, respectively, and the results for those magnification factors exceed the state of the art.
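The handcrafted descriptors named in the abstract, a per-channel color histogram and a uniform LBP texture histogram, are commonly fused by simple concatenation of the normalized feature vectors. The sketch below is an illustrative NumPy implementation under assumed parameters (8 bins per color channel, an 8-neighbour LBP with the standard 59-bin uniform mapping); it is not the authors' exact pipeline.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalized per-channel intensity histogram; img is HxWx3 uint8."""
    feats = []
    for c in range(img.shape[2]):
        h, _ = np.histogram(img[:, :, c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())
    return np.concatenate(feats)  # 3 * bins values

def uniform_lbp_histogram(gray):
    """8-neighbour uniform LBP histogram (58 uniform bins + 1 bin for
    all non-uniform codes = 59 bins); gray is HxW uint8."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # circular 8-neighbourhood, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= center).astype(np.int32) << bit
    # a code is "uniform" if its circular bit string has <= 2 transitions
    def transitions(v):
        bits = [(v >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    lut = np.full(256, 58, dtype=np.int32)        # bin 58 = non-uniform
    uniform_codes = [v for v in range(256) if transitions(v) <= 2]
    for i, v in enumerate(uniform_codes):         # 58 uniform codes
        lut[v] = i
    h = np.bincount(lut[code].ravel(), minlength=59)
    return h / h.sum()

def fused_features(img):
    """Fusion by concatenation: color histogram + uLBP histogram."""
    gray = img.mean(axis=2).astype(np.uint8)
    return np.concatenate([color_histogram(img), uniform_lbp_histogram(gray)])
```

The resulting fixed-length vector (24 color bins + 59 texture bins here) can be fed to any conventional classifier, which is what makes such handcrafted pipelines lightweight compared with fine-tuning a deep network.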

