Boosted Additive Angular Margin Loss for breast cancer diagnosis from histopathological images.

Affiliations

University of the Basque Country UPV/EHU, San Sebastian, Spain.

Ho Chi Minh City Open University, Ho Chi Minh City, Viet Nam.

Publication information

Comput Biol Med. 2023 Nov;166:107528. doi: 10.1016/j.compbiomed.2023.107528. Epub 2023 Sep 22.

Abstract

Pathologists use biopsies and microscopic examination to accurately diagnose breast cancer. This process is time-consuming, labor-intensive, and costly. Convolutional neural networks (CNNs) offer an efficient and highly accurate approach to reduce analysis time and automate the diagnostic workflow in pathology. However, the softmax loss commonly used in existing CNNs leads to noticeable ambiguity in decision boundaries and lacks an explicit constraint for minimizing within-class variance. To address this problem, angular-margin-based softmax losses were developed. These losses were introduced in the context of face recognition, with the goal of integrating an angular margin into the softmax loss. This integration yields more discriminative features during CNN training by effectively increasing the distance between different classes while reducing the variance within each class. Despite significant progress, these losses apply margin penalties to the target class only, which may limit their effectiveness. In this paper, we introduce Boosted Additive Angular Margin Loss (BAM) to obtain highly discriminative features for breast cancer diagnosis from histopathological images. BAM not only penalizes the angle between deep features and their target class weights, but also considers the angles between deep features and non-target class weights. We performed extensive experiments on the publicly available BreaKHis dataset. BAM achieved remarkable accuracies of 99.79%, 99.86%, 99.96%, and 97.65% at magnification levels of 40X, 100X, 200X, and 400X, respectively. These results represent accuracy improvements of 0.13%, 0.34%, and 0.21% over the baseline methods at 40X, 100X, and 200X magnifications, respectively.
Additional experiments were performed on the BACH dataset for breast cancer classification and on the widely accepted LFW and YTF datasets for face recognition to evaluate the generalization ability of the proposed loss function. The results show that BAM outperforms state-of-the-art methods by increasing the decision space between classes and minimizing intra-class variance, resulting in improved discriminability.
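The abstract describes a loss that adds an angular margin to the target class (as in additive-angular-margin losses such as ArcFace) while also penalizing the angles to non-target class weights. The paper's exact formulation is not given here, so the following numpy sketch is only illustrative: the function name `bam_like_loss`, the non-target margin `m_nt`, and all hyperparameter values are assumptions, not the authors' implementation.

```python
import numpy as np

def bam_like_loss(feature, class_weights, target, s=30.0, m=0.5, m_nt=0.1):
    """Cross-entropy over angular logits with an additive margin m on the
    target-class angle and a hypothetical margin m_nt that widens the gap
    to non-target classes. feature: (d,), class_weights: (C, d), target: int."""
    # L2-normalize so each logit is the cosine of the angle between the
    # feature vector and a class-weight vector.
    f = feature / np.linalg.norm(feature)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    theta = np.arccos(np.clip(w @ f, -1.0, 1.0))

    # Target class: add m to its angle, shrinking its logit (ArcFace-style).
    # Non-target classes: subtract m_nt, growing their logits, so training
    # must separate classes by more than plain softmax would require.
    adjusted = theta - m_nt
    adjusted[target] = theta[target] + m
    logits = s * np.cos(np.clip(adjusted, 0.0, np.pi))

    # Numerically stable log-softmax cross-entropy.
    z = logits - logits.max()
    return -(z[target] - np.log(np.exp(z).sum()))
```

With both margins set to zero this reduces to an ordinary softmax cross-entropy over scaled cosine similarities; with nonzero margins the loss on a correctly classified sample is strictly larger, which is what drives the larger inter-class decision space and smaller intra-class variance the abstract refers to.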
