Mostafa Sakib, Mondal Debajyoti, Panjvani Karim, Kochian Leon, Stavness Ian
Department of Computer Science, University of Saskatchewan, Saskatoon, SK, Canada.
Global Institute for Food Security, University of Saskatchewan, Saskatoon, SK, Canada.
Front Artif Intell. 2023 Sep 19;6:1203546. doi: 10.3389/frai.2023.1203546. eCollection 2023.
The growing human population and increasingly variable weather conditions driven by climate change pose a threat to the world's food security. To improve global food security, we need to provide breeders with tools to develop crop cultivars that are more resilient to extreme weather, and growers with tools to manage biotic and abiotic stresses in their crops more effectively. Plant phenotyping, the measurement of a plant's structural and functional characteristics, has the potential to inform, improve, and accelerate both breeders' selections and growers' management decisions. To improve the speed, reliability, and scale of plant phenotyping procedures, many researchers have adopted deep learning methods to estimate phenotypic information from images of plants and crops. Despite the successful results of these image-based phenotyping studies, the representations learned by deep learning models remain difficult to interpret, understand, and explain; for this reason, deep learning models are still considered black boxes. Explainable AI (XAI) is a promising approach for opening the deep learning black box and providing plant scientists with image-based phenotypic information that is interpretable and trustworthy. Although various fields of study have adopted XAI to advance their understanding of deep learning models, it has yet to be well studied in the context of plant phenotyping research. In this article, we review existing XAI studies in plant shoot phenotyping, as well as related domains, to help plant researchers understand the benefits of XAI and make it easier for them to integrate XAI into their future studies. Elucidating the representations within a deep learning model can help researchers explain the model's decisions, relate the features detected by the model to the underlying plant physiology, and enhance the trustworthiness of image-based phenotypic information used in food production systems.
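As a concrete illustration of what "opening the black box" can look like in practice, the sketch below applies Grad-CAM, one widely used XAI technique (and not specific to any study reviewed here), to an image classifier. The model (a PyTorch ResNet-18 standing in for any image-based phenotyping network), the image path "leaf.png", and the preprocessing are illustrative assumptions, not details taken from the reviewed work.

# A minimal Grad-CAM sketch (Selvaraju et al., 2017), assuming a generic
# ResNet-18 image classifier; the weights, image path, and preprocessing
# below are placeholders, not from the studies reviewed in this article.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Capture activations and gradients at the last convolutional block.
acts, grads = {}, {}
layer = model.layer4
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("leaf.png").convert("RGB")).unsqueeze(0)  # hypothetical plant image

logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the predicted class score

# Weight each feature-map channel by its average gradient, sum, and rectify.
weights = grads["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
# cam[0, 0] is a heatmap of the image regions that drove the prediction.

The resulting heatmap can be overlaid on the input image so that a plant scientist can check whether the prediction was driven by biologically plausible regions (e.g., leaf tissue or lesions) rather than background artifacts.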