

Constellation Loss: Improving the Efficiency of Deep Metric Learning Loss Functions for the Optimal Embedding of Histopathological Images.

Author Information

Medela Alfonso, Picon Artzai

Affiliations

TECNALIA, Basque Research and Technology Alliance (BRTA), Parque Tecnológico de Bizkaia, C/ Geldo. Edificio 700, E-48160 Derio - Bizkaia (Spain).

University of the Basque Country, Alameda de Urquijo s/n, 48013 Bilbao, Bizkaia (Spain).

Publication Information

J Pathol Inform. 2020 Nov 26;11:38. doi: 10.4103/jpi.jpi_41_20. eCollection 2020.

Abstract

BACKGROUND

Deep learning diagnostic algorithms are achieving results comparable to those of human experts in a wide variety of tasks, yet they still require a huge amount of well-annotated training data, which is often unaffordable. Metric learning techniques have reduced the amount of annotated data required, enabling few-shot learning with deep learning architectures.

AIMS AND OBJECTIVES

In this work, we analyze state-of-the-art loss functions such as triplet loss, contrastive loss, and multi-class N-pair loss for extracting visual embeddings from hematoxylin and eosin (H&E) microscopy images, and we propose a novel constellation loss function that takes advantage of the distances between the embeddings of the negative samples, thereby performing a regularization that increases the quality of the extracted embeddings.
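To make the contrast between the loss families concrete, below is a minimal NumPy sketch of a classic triplet loss next to a constellation-style loss that aggregates several negatives per anchor. This is an illustrative paraphrase of the losses named above, not the paper's exact formulation; function names, the margin value, and the dot-product similarity are assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Classic triplet loss on embedding vectors (N, D): push the
    positive at least `margin` closer to the anchor than the negative."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

def constellation_loss(anchor, positive, negatives):
    """Constellation-style loss: compare each anchor (N, D) against K
    negatives (N, K, D) jointly, via log(1 + sum_k exp(a·n_k - a·p)),
    so every negative in the 'constellation' contributes a penalty."""
    sim_pos = np.sum(anchor * positive, axis=-1)          # (N,)
    sim_neg = np.einsum('nd,nkd->nk', anchor, negatives)  # (N, K)
    return np.log1p(np.exp(sim_neg - sim_pos[:, None]).sum(axis=1)).mean()

# A triplet with coincident anchor/positive and a distant negative
# yields zero triplet loss; the constellation loss grows as a negative
# becomes more similar to the anchor.
```

Because the constellation term sums over all K negatives inside the logarithm, a single hard negative dominates the gradient much as in multi-class N-pair loss, while easy negatives contribute almost nothing.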

MATERIALS AND METHODS

To this end, we employed the public H&E imaging dataset from the University Medical Center Mannheim (Germany), which contains tissue samples from low-grade and high-grade primary tumors in digitized colorectal cancer tissue slides. These samples are divided into eight different textures (1. tumour epithelium, 2. simple stroma, 3. complex stroma, 4. immune cells, 5. debris and mucus, 6. mucosal glands, 7. adipose tissue, and 8. background). The dataset was divided randomly into train and test splits, and the training split was used to train a classifier to distinguish among the different textures with just 20 training images. The process was repeated 10 times for each loss function. Performance was compared both for cluster compactness and for classification accuracy in separating the aforementioned textures.
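The evaluation protocol above (random splits with only 20 training images, repeated 10 times) can be sketched as a small helper. This is a hypothetical reconstruction of the protocol as phrased in the text; the function name and whether the 20 images are counted in total or per class are assumptions.

```python
import numpy as np

def repeated_few_shot_splits(n_samples, n_train=20, n_repeats=10, seed=0):
    """Draw `n_repeats` independent random train/test splits, each
    using only `n_train` images for training, as in the repeated
    few-shot evaluation described above. Returns (train_idx, test_idx)
    index-array pairs."""
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_repeats):
        idx = rng.permutation(n_samples)
        splits.append((idx[:n_train], idx[n_train:]))
    return splits

# e.g. splits = repeated_few_shot_splits(5000) gives 10 disjoint
# 20-image train / 4980-image test partitions.
```

Averaging accuracy and cluster metrics over such repeats is what makes the ±-style results in the next section meaningful despite the tiny training set.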

RESULTS

Our results show that the proposed loss function outperforms the other methods by obtaining more compact clusters (Davies-Bouldin index: 1.41 ± 0.08, Silhouette: 0.37 ± 0.02) and better classification capabilities (accuracy: 85.0 ± 0.6) over H&E microscopy images. We demonstrate that the proposed constellation loss can be successfully used in the medical domain in situations of data scarcity.
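The Davies-Bouldin index reported above measures, for each cluster, the worst-case ratio of within-cluster scatter to between-centroid distance (lower is more compact). In practice scikit-learn's `davies_bouldin_score` and `silhouette_score` compute these metrics; as an illustration of the definition, a from-scratch NumPy sketch:

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index on embeddings X (N, D) grouped by `labels`:
    for each cluster, take the worst ratio of summed within-cluster
    scatters to centroid separation, then average over clusters."""
    classes = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in classes])
    # mean distance of each cluster's points to its own centroid
    scatter = np.array([
        np.linalg.norm(X[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    k = len(classes)
    worst = []
    for i in range(k):
        ratios = [
            (scatter[i] + scatter[j])
            / np.linalg.norm(centroids[i] - centroids[j])
            for j in range(k) if j != i
        ]
        worst.append(max(ratios))
    return float(np.mean(worst))
```

Two tight, well-separated clusters give an index near zero, whereas overlapping clusters push it well above one, which is why a drop toward 1.41 indicates more compact embeddings.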


