An end-to-end breast tumour classification model using context-based patch modelling - A BiLSTM approach for image classification.

Affiliations

Department of Information Technology, Indian Institute of Information Technology Allahabad, Devghat, Jhalwa, Prayagraj 211015, India.

Publication information

Comput Med Imaging Graph. 2021 Jan;87:101838. doi: 10.1016/j.compmedimag.2020.101838. Epub 2020 Dec 4.

Abstract

Researchers working on the computational analysis of Whole Slide Images (WSIs) in histopathology have primarily resorted to patch-based modelling due to the large resolution of each WSI, which makes it computationally infeasible to feed WSIs directly into machine learning models. However, because of this patch-based analysis, most current methods fail to exploit the underlying spatial relationship among the patches. In our work, we integrate this spatial relationship with the feature-based correlation among patches extracted from a particular tumorous region. The tumour regions extracted from WSIs have arbitrary dimensions, ranging from 195 to 20,570 pixels in width and from 226 to 17,290 pixels in height. For the classification task, we use BiLSTMs to model both forward and backward contextual relationships. Moreover, using an RNN-based model removes the limitation on sequence size, which allows variable-size images to be modelled within a deep learning framework. We also incorporate the effect of spatial continuity by exploring different scanning techniques for sampling patches. To establish the efficiency of our approach, we trained and tested our model on two datasets, microscopy images and WSI tumour regions, both published by the ICIAR BACH Challenge 2018. Finally, we compared our results with the top five teams that participated in the BACH challenge and achieved the best accuracy of 90% on the microscopy image dataset. For the WSI tumour region dataset, we compared our classification results with state-of-the-art deep learning networks such as ResNet, DenseNet, and InceptionV3 using a maximum-voting technique, and achieved the highest accuracy of 84%. We found that BiLSTMs with CNN features perform much better at modelling patches into an end-to-end image classification network. Additionally, the variable dimensions of WSI tumour regions were used for classification without resizing, which suggests that our method is independent of tumour image size and can process large images without losing resolution details.
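
The abstract outlines the pipeline at a high level: CNN features are extracted from each patch, the variable-length patch sequence of a tumour region is fed to a BiLSTM, and the final hidden states drive the classification. The sketch below is a rough illustration of that idea in PyTorch; it is not the authors' implementation, and the ResNet-18 backbone, 224x224 patch size, hidden dimension, and four-class output are assumptions chosen only for the example.

```python
# Minimal sketch (not the authors' code): per-patch CNN features -> BiLSTM over a
# variable-length patch sequence -> classification. Backbone, patch size, and
# class count are assumptions for illustration.
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence
from torchvision import models


class PatchBiLSTMClassifier(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=4):
        super().__init__()
        # Frozen CNN backbone used only as a per-patch feature extractor
        # (downloads ImageNet weights; assumption, not the paper's backbone).
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # -> (N, 512, 1, 1)
        for p in self.cnn.parameters():
            p.requires_grad = False
        # BiLSTM models forward and backward context across the patch sequence.
        self.bilstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, patches, lengths):
        # patches: (B, T_max, 3, 224, 224) zero-padded patch sequences
        # lengths: number of real patches per tumour region (varies per image)
        B, T = patches.shape[:2]
        feats = self.cnn(patches.flatten(0, 1)).flatten(1)       # (B*T, 512)
        feats = feats.view(B, T, -1)                              # (B, T, 512)
        # Packing lets the BiLSTM ignore padding, so sequence length can vary.
        packed = pack_padded_sequence(feats, lengths.cpu(),
                                      batch_first=True, enforce_sorted=False)
        _, (h_n, _) = self.bilstm(packed)
        # Concatenate the last forward and backward hidden states.
        h = torch.cat([h_n[0], h_n[1]], dim=1)                    # (B, 2*hidden)
        return self.fc(h)


# Example: two tumour regions yielding different numbers of patches.
model = PatchBiLSTMClassifier()
patches = torch.randn(2, 12, 3, 224, 224)   # padded to the longest sequence
lengths = torch.tensor([12, 7])             # real patch counts per region
logits = model(patches, lengths)            # (2, num_classes)
```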
