Taheri-Garavand Amin, Nasiri Amin, Fanourakis Dimitrios, Fatahi Soodabeh, Omid Mahmoud, Nikoloudakis Nikolaos
Mechanical Engineering of Biosystems Department, Lorestan University, Khorramabad P.O. Box 465, Iran.
Department of Biosystems Engineering and Soil Science, University of Tennessee, Knoxville, TN 37996, USA.
Plants (Basel). 2021 Jul 9;10(7):1406. doi: 10.3390/plants10071406.
Timely seed variety recognition is critical to limiting qualitative and quantitative yield losses and asynchronous crop production. The conventional method is subjective and error-prone, since it relies on human experts and usually requires accredited seed material. This paper presents a convolutional neural network (CNN) framework for the automatic identification of chickpea varieties from seed images in the visible spectrum (400-700 nm). Two low-cost devices were employed for image acquisition, under variable lighting and imaging (background, focus, angle, and camera-to-sample distance) conditions. The VGG16 architecture was modified by adding a global average pooling layer, dense layers, a batch normalization layer, and a dropout layer. The resulting model was able to distinguish the intricate visual features of the diverse chickpea varieties and to recognize varieties from these features. Five-fold cross-validation was performed to evaluate the uncertainty and predictive efficiency of the CNN model. The modified deep learning model recognized the chickpea seed varieties with an average classification accuracy above 94%. Moreover, the proposed vision-based model was robust in seed variety identification, independent of the image acquisition device, light environment, and imaging settings. This opens an avenue for extension into novel applications that use mobile phones to acquire and process information in situ. The proposed procedure thus offers possibilities for deployment in the seed industry and in mobile applications for fast and robust automated seed identification.
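The abstract names the layers appended to the VGG16 backbone but not their sizes. The sketch below (PyTorch) shows one plausible shape of such a head; the hidden width (256), dropout rate (0.5), and number of varieties (4) are illustrative assumptions, not values reported by the paper.

```python
import torch
import torch.nn as nn


class ModifiedHead(nn.Module):
    """Classification head sketched from the abstract: global average
    pooling, a dense layer, batch normalization, dropout, and a final
    dense output layer. Layer widths and the dropout rate are assumed."""

    def __init__(self, in_channels: int = 512, n_varieties: int = 4):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)        # global average pooling
        self.fc1 = nn.Linear(in_channels, 256)    # dense layer (width assumed)
        self.bn = nn.BatchNorm1d(256)             # batch normalization
        self.drop = nn.Dropout(0.5)               # dropout (rate assumed)
        self.out = nn.Linear(256, n_varieties)    # one logit per variety

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.gap(x).flatten(1)                # (N, C, H, W) -> (N, C)
        x = torch.relu(self.bn(self.fc1(x)))
        x = self.drop(x)
        return self.out(x)


# VGG16's convolutional base emits 512-channel feature maps; a random
# tensor stands in for a batch of such features here.
features = torch.randn(8, 512, 7, 7)
logits = ModifiedHead()(features)
print(logits.shape)  # torch.Size([8, 4])
```

In practice the head would replace VGG16's original fully connected classifier (e.g. via `torchvision.models.vgg16`), with the convolutional base initialized from pretrained weights.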
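The five-fold cross-validation used to estimate the model's uncertainty and predictive efficiency can be sketched with a plain index split; the fold count follows the abstract, while the seeding and the placeholder per-fold "accuracy" are illustrative only.

```python
import random


def five_fold_indices(n_samples: int, k: int = 5, seed: int = 0):
    """Yield (train, validation) index lists for k-fold cross-validation.
    Each sample appears in exactly one validation fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]     # k near-equal folds
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val


# The paper reports the accuracy averaged over the five folds (above 94%);
# here a placeholder value stands in for each fold's evaluation.
fold_accuracies = []
for train_idx, val_idx in five_fold_indices(100):
    # ... train the CNN on train_idx, evaluate on val_idx ...
    fold_accuracies.append(len(val_idx) / 20)  # placeholder "accuracy"
mean_acc = sum(fold_accuracies) / len(fold_accuracies)
```

Each fold's validation set is held out entirely from training, so the averaged accuracy reflects performance on unseen seed images rather than memorization.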