An explainable deep machine vision framework for plant stress phenotyping.

Author Affiliations

Department of Mechanical Engineering, Iowa State University, Ames, IA 50011.

Department of Agronomy, Iowa State University, Ames, IA 50011.

Publication Information

Proc Natl Acad Sci U S A. 2018 May 1;115(18):4613-4618. doi: 10.1073/pnas.1716999115. Epub 2018 Apr 16.

Abstract

Current approaches for accurate identification, classification, and quantification of biotic and abiotic stresses in crop research and production are predominantly visual and require specialized training. However, such techniques are hindered by subjectivity resulting from inter- and intrarater cognitive variability. This translates to erroneous decisions and a significant waste of resources. Here, we demonstrate a machine learning framework's ability to identify and classify a diverse set of foliar stresses in soybean [Glycine max (L.) Merr.] with remarkable accuracy. We also present an explanation mechanism, using the top-K high-resolution feature maps that isolate the visual symptoms used to make predictions. This unsupervised identification of visual symptoms provides a quantitative measure of stress severity, allowing for identification (type of foliar stress), classification (low, medium, or high stress), and quantification (stress severity) in a single framework without detailed symptom annotation by experts. We reliably identified and classified several biotic (bacterial and fungal diseases) and abiotic (chemical injury and nutrient deficiency) stresses by learning from over 25,000 images. The learned model is robust to input image perturbations, demonstrating viability for high-throughput deployment. We also noticed that the learned model appears to be agnostic to species, seemingly demonstrating a capacity for transfer learning. The availability of an explainable model that can consistently, rapidly, and accurately identify and quantify foliar stresses would have significant implications in scientific research, plant breeding, and crop production. The trained model could be deployed in mobile platforms (e.g., unmanned air vehicles and automated ground scouts) for rapid, large-scale scouting or as a mobile application for real-time detection of stress by farmers and researchers.
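The explanation mechanism described above selects the top-K most strongly activated feature maps from a convolutional layer and uses them to localize and quantify visual symptoms. A minimal sketch of that idea, using NumPy only (the function name, the ranking by mean activation, and the fixed threshold are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np

def topk_severity(feature_maps, k=3, threshold=0.5):
    """Combine the k most-activated feature maps into an explanation
    heatmap and estimate stress severity as the flagged-pixel fraction.

    feature_maps: array of shape (C, H, W), e.g. activations of the
    last convolutional layer for one leaf image.
    """
    # Rank channels by mean activation and keep the top k.
    scores = feature_maps.mean(axis=(1, 2))
    top = feature_maps[np.argsort(scores)[-k:]]

    # Average the selected maps into one heatmap, normalized to [0, 1].
    heatmap = top.mean(axis=0)
    heatmap = (heatmap - heatmap.min()) / (np.ptp(heatmap) + 1e-8)

    # Severity: fraction of pixels whose activation exceeds the threshold.
    severity = float((heatmap > threshold).mean())
    return heatmap, severity
```

In this hypothetical setup, the severity score could then be binned into the low/medium/high classes mentioned in the abstract, giving identification, classification, and quantification from the same maps.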

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1756/5939070/f28f0594863f/pnas.1716999115fig01.jpg
