

Understanding plant phenotypes in crop breeding through explainable AI.

Author Information

Danilevicz Monica F, Upadhyaya Shriprabha R, Batley Jacqueline, Bennamoun Mohammed, Bayer Philipp E, Edwards David

Affiliations

School of Biological Sciences and Centre for Applied Bioinformatics, University of Western Australia, Crawley, WA, Australia.

School of Physics, Mathematics and Computing, University of Western Australia, Crawley, WA, Australia.

Publication Information

Plant Biotechnol J. 2025 Jun 26. doi: 10.1111/pbi.70208.

Abstract

The use of machine learning in plant phenotyping has grown exponentially. These algorithms have enabled image data to be used to measure plant traits rapidly and to predict the effect of genetic and environmental conditions on plant phenotype. However, the lack of interpretability in machine learning models has limited their usefulness for gaining insight into the underlying biological processes that drive plant phenotypes. Explainable AI (XAI) has emerged to help understand the 'why' behind machine learning model predictions, allowing researchers to investigate the features that most strongly influence prediction, classification or segmentation results. Understanding the mechanisms behind model predictions is also central to sanity-checking models, increasing model reliability and identifying dataset biases that may limit a model's applicability across different conditions. This review introduces the concept of XAI and presents current algorithms, emphasizing their suitability for different data types and machine learning algorithms. The use of XAI to extract trait information is highlighted, showcasing how recent studies have employed model explanations to recognize the features that affect plant phenotype. Overall, this review presents a framework for using XAI to gain insight into the intricate biological processes driving plant phenotypes, underscoring the importance of transparency and interpretability in machine learning.
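As an illustration of the feature-attribution workflow the abstract describes, the minimal sketch below uses SHAP values to rank which inputs a trait-prediction model relies on. It is not taken from the paper: the synthetic dataset, the feature names and the random-forest model are illustrative assumptions, and other XAI methods discussed in such reviews (e.g. saliency maps for image models) would follow the same pattern of explaining a trained black-box predictor.

```python
# Illustrative sketch only (not from the paper): ranking the input features
# that drive a plant-trait prediction model with SHAP.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical tabular phenotyping dataset: rows are plots, columns are
# image-derived or environmental features (names are made up for illustration).
feature_names = ["canopy_cover", "greenness_index", "plant_height",
                 "days_to_flowering", "soil_moisture"]
X = rng.normal(size=(200, len(feature_names)))
# Synthetic "yield" trait driven mainly by the first and third features.
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

# Train a black-box model; a random forest stands in for the phenotype
# prediction models discussed in the review.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes per-sample, per-feature contributions to each
# prediction relative to the model's expected output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global importance: mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance),
                          key=lambda t: t[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

On this synthetic data the highest-ranked features are the ones used to generate the trait, which is the kind of sanity check and bias detection the review highlights as a core use of XAI.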

