Applying Faster R-CNN for Object Detection on Malaria Images.

Author Information

Hung Jane, Lopes Stefanie C P, Nery Odailton Amaral, Nosten Francois, Ferreira Marcelo U, Duraisingh Manoj T, Marti Matthias, Ravel Deepali, Rangel Gabriel, Malleret Benoit, Lacerda Marcus V G, Rénia Laurent, Costa Fabio T M, Carpenter Anne E

Affiliations

Massachusetts Institute of Technology.

Instituto Leônidas e Maria Deane, Fundação Oswaldo Cruz (FIOCRUZ); Fundação de Medicina Tropical Dr. Heitor Vieira Dourado, Gerência de Malária.

Publication Information

Conf Comput Vis Pattern Recognit Workshops. 2017 Jul;2017:808-813. doi: 10.1109/cvprw.2017.112. Epub 2021 Nov 18.

Abstract

Deep learning-based models have had great success in object detection, but the state-of-the-art models have not yet been widely applied to biological image data. We apply for the first time an object detection model previously used on natural images to identify cells and recognize their stages in brightfield microscopy images of malaria-infected blood. Many micro-organisms like malaria parasites are still studied by expert manual inspection and hand counting. This type of object detection task is challenging due to factors like variations in cell shape, density, and color, and the uncertainty of some cell classes. In addition, annotated data useful for training is scarce, and the class distribution is inherently highly imbalanced due to the dominance of uninfected red blood cells. We use Faster Region-based Convolutional Neural Network (Faster R-CNN), one of the top-performing object detection models in recent years, pre-trained on ImageNet but fine-tuned with our data, and compare it to a baseline based on a traditional approach consisting of cell segmentation, extraction of several single-cell features, and classification using random forests. To conduct our initial study, we collect and label a dataset of 1300 fields of view consisting of around 100,000 individual cells. We demonstrate that Faster R-CNN outperforms our baseline and put the results in the context of human performance.
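To make the fine-tuning setup in the abstract more concrete, below is a minimal sketch of adapting a pre-trained Faster R-CNN to a cell-detection label set using torchvision. This is not the authors' code: NUM_CLASSES, the COCO-pre-trained weights, and the hyperparameters are illustrative assumptions (the paper reports pre-training on ImageNet and fine-tuning on the malaria dataset). A companion sketch of the traditional baseline follows after this one.

```python
# Minimal sketch: fine-tuning a torchvision Faster R-CNN for cell detection.
# Not the authors' code; NUM_CLASSES, weights, and hyperparameters are
# assumptions made for illustration only.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 6  # background + hypothetical cell classes (uninfected RBCs + parasite stages)

# Start from a detector pre-trained on natural images, as the abstract describes
# (torchvision ships COCO weights; the paper reports ImageNet pre-training).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the box classification head so it predicts cell classes instead of COCO categories.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=5e-4)

def train_one_epoch(model, loader):
    """Fine-tune on (image, target) pairs; each target is a dict with 'boxes'
    (N x 4 float tensor, xyxy) and 'labels' (N int64 tensor) for one field of view."""
    model.train()
    for images, targets in loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # RPN + ROI-head losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

At inference time, calling the model in eval mode on a list of images returns per-image dictionaries of boxes, labels, and scores, which is the output format one would compare against expert annotations.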

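The traditional baseline the abstract describes (cell segmentation, hand-crafted single-cell features, and random forest classification) might look roughly like the sketch below using scikit-image and scikit-learn. The thresholding rule, feature set, and classifier settings are assumptions for illustration, not the authors' exact pipeline.

```python
# Rough sketch of a segmentation + hand-crafted features + random forest baseline.
# Not the authors' pipeline; the segmentation rule and feature set are assumptions.
import numpy as np
from skimage import filters, measure
from sklearn.ensemble import RandomForestClassifier

def segment_cells(gray):
    """Crude segmentation: Otsu threshold, then connected components.
    Assumes cells appear darker than background in brightfield images."""
    mask = gray < filters.threshold_otsu(gray)
    return measure.label(mask)

def cell_features(gray, labels):
    """A few shape and intensity features per segmented cell."""
    feats = []
    for region in measure.regionprops(labels, intensity_image=gray):
        feats.append([
            region.area,
            region.eccentricity,
            region.perimeter,
            region.mean_intensity,
        ])
    return np.asarray(feats)

# X: per-cell feature matrix, y: per-cell class/stage labels from annotations.
# class_weight="balanced" is one way to counter the heavy dominance of
# uninfected red blood cells noted in the abstract.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```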

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8dd/8691760/5bf583dc1368/nihms-1682216-f0001.jpg

Similar Articles

1. Applying Faster R-CNN for Object Detection on Malaria Images. Conf Comput Vis Pattern Recognit Workshops. 2017 Jul;2017:808-813. doi: 10.1109/cvprw.2017.112. Epub 2021 Nov 18.

7. TEM virus images: Benchmark dataset and deep learning classification. Comput Methods Programs Biomed. 2021 Sep;209:106318. doi: 10.1016/j.cmpb.2021.106318. Epub 2021 Jul 29.

10. S-CNN: Subcategory-Aware Convolutional Networks for Object Detection. IEEE Trans Pattern Anal Mach Intell. 2018 Oct;40(10):2522-2528. doi: 10.1109/TPAMI.2017.2756936. Epub 2017 Sep 26.

Cited By

4. A transfer learning approach to identify Plasmodium in microscopic images. PLoS Comput Biol. 2024 Aug 5;20(8):e1012327. doi: 10.1371/journal.pcbi.1012327. eCollection 2024 Aug.

References

1. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
