
Plant Species Classification Based on Hyperspectral Imaging: A Lightweight Convolutional Neural Network Model

Author information

Liu Keng-Hao, Yang Meng-Hsien, Huang Sheng-Ting, Lin Chinsu

Affiliations

Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan.

Department of Forestry and Natural Resources, National Chiayi University, Chiayi, Taiwan.

Publication information

Front Plant Sci. 2022 Apr 13;13:855660. doi: 10.3389/fpls.2022.855660. eCollection 2022.

Abstract

In recent years, many image-based approaches have been proposed to classify plant species. Most methods utilized red-green-blue (RGB) imagery and designed custom features to classify the plant images using machine learning algorithms. Those works primarily focused on analyzing single-leaf images instead of live-crown images. Because RGB imaging provides limited spectral information and such features do not capture leaf color and spatial pattern, these methods failed to handle cases containing leaves similar in appearance. To tackle this dilemma, this study proposes a novel framework that combines hyperspectral imaging (HSI) and deep learning techniques for plant image classification. We built a plant image dataset containing 1,500 images of 30 different plant species taken by a 470-900 nm hyperspectral camera and designed a lightweight convolutional neural network (CNN) model (LtCNN) to perform image classification. Several state-of-the-art CNN classifiers were chosen for comparison, and the impact of using different band combinations as the network input was also investigated. Results show that using simulated RGB images achieves a kappa coefficient of nearly 0.90, while using the combination of 3-band RGB and 3-band near-infrared images improves it to 0.95. The proposed LtCNN also obtains satisfactory plant classification performance (kappa = 0.95) using the critical spectral features of the green-edge (591 nm), red-edge (682 nm), and near-infrared (762 nm) bands. This study further demonstrates the excellent adaptability of the LtCNN model in recognizing leaf features of plant live-crown images while using relatively fewer training samples than complex CNN models such as AlexNet, GoogLeNet, and VGGNet.
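
The abstract describes the pipeline only at a high level: pick informative bands from the 470-900 nm cube, feed a 3- or 6-band image to a lightweight CNN, and evaluate with the kappa coefficient. As a reading aid, the following is a minimal Python/PyTorch sketch of that general idea, not the authors' LtCNN: the helper names (select_bands, TinyPlantCNN, cohens_kappa), the 150-band cube, the 128 x 128 image size, and the two-block network are illustrative assumptions.

```python
# Illustrative sketch only (assumed names and sizes); the LtCNN architecture itself
# is not specified in the abstract.
import numpy as np
import torch
import torch.nn as nn


def select_bands(cube, wavelengths, targets=(591.0, 682.0, 762.0)):
    """Pick the bands whose centre wavelengths (nm) are closest to the targets.

    cube: (H, W, B) hyperspectral image; wavelengths: (B,) band centres in nm.
    Defaults to the green-edge, red-edge, and near-infrared bands named in the abstract.
    """
    idx = [int(np.argmin(np.abs(np.asarray(wavelengths) - t))) for t in targets]
    return cube[:, :, idx]


class TinyPlantCNN(nn.Module):
    """A small two-block CNN standing in for the lightweight classifier over 30 species."""

    def __init__(self, in_bands=3, num_classes=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):
        return self.head(self.features(x))


def cohens_kappa(conf_mat):
    """Cohen's kappa from a square confusion matrix (rows = true, cols = predicted)."""
    conf_mat = np.asarray(conf_mat, dtype=float)
    total = conf_mat.sum()
    p_o = np.trace(conf_mat) / total                                 # observed agreement
    p_e = conf_mat.sum(axis=1) @ conf_mat.sum(axis=0) / total ** 2   # chance agreement
    return (p_o - p_e) / (1.0 - p_e)


if __name__ == "__main__":
    # Synthetic 128 x 128 cube with 150 bands spanning 470-900 nm (sizes are assumptions).
    wavelengths = np.linspace(470.0, 900.0, 150)
    cube = np.random.rand(128, 128, 150).astype(np.float32)

    three_band = select_bands(cube, wavelengths)                        # (128, 128, 3)
    x = torch.from_numpy(three_band).permute(2, 0, 1).unsqueeze(0)      # (1, 3, 128, 128)
    logits = TinyPlantCNN()(x)                                          # (1, 30) class scores
    print(logits.shape)

    # Toy 2-class confusion matrix just to illustrate the metric; kappa here is 0.70.
    print(cohens_kappa([[45, 5], [10, 40]]))
```

Passing six target wavelengths to select_bands and setting in_bands=6 would mimic the 3-band RGB plus 3-band near-infrared input combination compared in the paper.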

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038a/9044035/885730f1eb3a/fpls-13-855660-g001.jpg
