Huang Xin, Liu Tengsheng, Yu Yue
Department of Physical Education, Wuhan Institute of Technology, 430070, Wuhan, China.
Intelligent Manufacturing College, Jinhua University of Vocational Technology, 321007, Jinhua, Zhejiang, China.
Sci Rep. 2025 Jul 15;15(1):25478. doi: 10.1038/s41598-025-10058-2.
Breast cancer is currently one of the leading causes of death among women, which underscores the need for precise X-ray image analysis in medicine and medical imaging. In this study, we present an advanced perceptual deep learning framework that extracts key features from large X-ray datasets by mimicking human visual perception. Starting from a large dataset of breast cancer images, we apply the BING objectness measure to identify relevant visual and semantic patches. To manage the large number of object-aware patches, we propose a new ranking technique for the weakly annotated setting that identifies the patches most aligned with human visual judgment. These key patches are then aggregated to extract meaningful features from each image. We use these features to train a multi-class SVM classifier that categorizes the images into breast cancer stages. The effectiveness of our deep learning model is demonstrated through extensive comparative analysis and visual examples.
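The final classification step described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the BING objectness detection and perceptual patch ranking are stood in for by randomly generated patch features, mean pooling is assumed as the aggregation, and the four-stage label set is hypothetical.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def aggregate_patch_features(patches):
    # Stand-in for aggregating the top-ranked object-aware patches
    # into one image-level descriptor (mean pooling, an assumption).
    return patches.mean(axis=0)

# Simulate 200 images, each with 32 candidate patches carrying 64-D
# features, labelled with one of 4 hypothetical cancer stages.
X = np.stack([aggregate_patch_features(rng.normal(size=(32, 64)))
              for _ in range(200)])
y = rng.integers(0, 4, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Multi-class SVM: scikit-learn's SVC handles >2 classes via a
# one-vs-one decomposition by default.
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
preds = clf.predict(X_te)
print("predicted stage labels:", sorted(set(preds)))
```

In practice the random features would be replaced by descriptors extracted from the ranked patches, and kernel and regularization choices would be tuned per dataset.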