Unsupervised Learning Based on Multiple Descriptors for WSIs Diagnosis.

Author Information

Sheikh Taimoor Shakeel, Kim Jee-Yeon, Shim Jaesool, Cho Migyung

Affiliations

Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Korea.

Department of Pathology, Pusan National University Yangsan Hospital, School of Medicine, Pusan National University, Yangsan-si 50612, Korea.

Publication Information

Diagnostics (Basel). 2022 Jun 16;12(6):1480. doi: 10.3390/diagnostics12061480.

Abstract

Automatic pathological diagnosis is a challenging task because histopathological images with different cellular heterogeneity representations are sometimes limited. To overcome this, we investigated how holistic and local appearance features with limited information can be fused to enhance analysis performance. We propose an unsupervised deep learning model for whole-slide image diagnosis that uses stacked autoencoders, simultaneously fed multiple image descriptors such as the histogram of oriented gradients and local binary patterns along with the original image, to fuse heterogeneous features. Pre-trained latent vectors are extracted from each autoencoder, and these fused feature representations are used for classification. Through various experiments, we observed that training with the additional descriptors helps the model cope with the many variants and the intricate cellular structure of histopathology data. Our model outperforms existing state-of-the-art approaches, achieving the highest accuracies of 87.2% for ICIAR2018 and 94.6% for Dartmouth, along with other significant metrics on public benchmark datasets. The model does not rely on a specific set of classifier-specific pre-trained features to achieve high performance: the unsupervised feature spaces are learned from several independent descriptors and can be combined with different classifier variants to classify cancers from whole-slide images. Furthermore, visualization shows that the proposed model classifies breast and lung cancer types in a manner consistent with how pathologists view them. We also designed a whole-slide image processing toolbox to extract and process patches from whole-slide images.
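
The descriptor-fusion idea described in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation; it assumes fixed-size uint8 RGB patches, uses scikit-image for the HOG and LBP views, an MLPRegressor bottleneck as a stand-in for each stacked autoencoder, and an SVM as one possible classifier variant. Each view gets its own unsupervised autoencoder, the latent vectors are concatenated, and the fused representation feeds the classifier.

```python
# Minimal sketch of multi-descriptor fusion with per-view autoencoders.
# Assumptions (not from the paper): patch size, latent_dim=64, SVM classifier.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog, local_binary_pattern
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC


def descriptor_views(patch_rgb):
    """Return the three flattened views of one patch: raw image, HOG, LBP."""
    gray = rgb2gray(patch_rgb)                      # float grayscale in [0, 1]
    gray_u8 = (gray * 255).astype(np.uint8)
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    lbp_vec = local_binary_pattern(gray_u8, P=8, R=1).ravel() / 255.0
    return patch_rgb.ravel() / 255.0, hog_vec, lbp_vec   # assumes uint8 RGB


def fit_autoencoder(X, latent_dim=64):
    """Train one bottleneck autoencoder per descriptor (unsupervised)."""
    ae = MLPRegressor(hidden_layer_sizes=(256, latent_dim, 256),
                      activation="relu", max_iter=300)
    ae.fit(X, X)                 # reconstruct the input; no labels are used
    return ae


def encode(ae, X):
    """Forward pass up to the bottleneck layer to obtain latent vectors."""
    h = np.asarray(X, dtype=float)
    for W, b in zip(ae.coefs_[:2], ae.intercepts_[:2]):
        h = np.maximum(h @ W + b, 0.0)   # ReLU layers before the bottleneck
    return h


def fused_features(patches, aes=None):
    """Concatenate the latent vectors of all descriptor autoencoders."""
    views = [np.stack(v) for v in zip(*(descriptor_views(p) for p in patches))]
    if aes is None:                       # training mode: fit one AE per view
        aes = [fit_autoencoder(X) for X in views]
    Z = np.hstack([encode(ae, X) for ae, X in zip(aes, views)])
    return Z, aes


# Example usage (train_patches / test_patches would come from a WSI patch
# extractor, e.g. OpenSlide-based tiling; labels are per-patch classes):
# Z_train, aes = fused_features(train_patches)
# clf = SVC().fit(Z_train, train_labels)
# Z_test, _ = fused_features(test_patches, aes=aes)
# print("patch-level accuracy:", clf.score(Z_test, test_labels))
```

Because the latent spaces are learned without labels, the same fused representation can be reused with different downstream classifiers, which mirrors the abstract's point that the model is not tied to one pre-trained feature/classifier pairing.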


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/eb2c/9222016/f076e9918d5f/diagnostics-12-01480-g001.jpg
