Department of Dairy and Process Engineering, Faculty of Food Science and Nutrition, Poznań University of Life Sciences, 31 Wojska Polskiego St., 60-624 Poznan, Poland.
Sensors (Basel). 2024 May 17;24(10):3198. doi: 10.3390/s24103198.
Explainability in machine and deep learning has recently become an important area of research and interest, driven both by the growing use of artificial intelligence (AI) methods and by the need to understand the decisions that models make. Explainable artificial intelligence (XAI) reflects increasing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms. Moreover, XAI makes the decisions reached by models more transparent as well as more effective. In this study, models from the 'glass box' group (including Decision Tree) and the 'black box' group (including Random Forest) were proposed for identifying selected types of currant powders. The models were trained and evaluated with the performance indicators accuracy, precision, recall, and F1-score. Their predictions were then visualized with Local Interpretable Model-agnostic Explanations (LIME) to explain how specific types of blackcurrant powders are identified from texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models for interpretable identification of currant powders. For Bagging_100, accuracy, precision, recall, and F1-score all reached approximately 0.979; in comparison, DT0 reached 0.968, 0.972, 0.968, and 0.969, and RF7_gini reached 0.963, 0.964, 0.963, and 0.963, respectively. All of these models therefore exceeded 96% on every performance measure. In the future, XAI based on model-agnostic methods can serve as an additional tool for analyzing data, including food products, even online.
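To make the pipeline described above concrete, the following Python sketch trains the three named classifiers, reports the four performance measures, and explains a single prediction with LIME in terms of the texture descriptors. It is illustrative only: the hyperparameters guessed for DT0, RF7_gini, and Bagging_100, the synthetic feature matrix, and the powder class names are assumptions, not the study's actual settings or GLCM-derived data.

```python
# Hypothetical sketch of the classification + LIME workflow; hyperparameters,
# data, and class labels are assumptions standing in for the study's own.
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

# GLCM texture descriptors used as features in the study.
FEATURES = ["entropy", "contrast", "correlation", "dissimilarity", "homogeneity"]
CLASSES = ["powder_A", "powder_B", "powder_C"]  # placeholder powder types

rng = np.random.default_rng(0)
X = rng.normal(size=(300, len(FEATURES)))      # stand-in for real descriptors
y = rng.integers(0, len(CLASSES), size=300)    # stand-in for powder labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Model names follow the paper; the hyperparameters here are guesses
# (e.g. max_depth=7 for RF7_gini, 100 estimators for Bagging_100).
models = {
    "DT0": DecisionTreeClassifier(random_state=0),
    "RF7_gini": RandomForestClassifier(max_depth=7, criterion="gini", random_state=0),
    "Bagging_100": BaggingClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    y_hat = model.predict(X_te)
    print(
        f"{name}: acc={accuracy_score(y_te, y_hat):.3f} "
        f"prec={precision_score(y_te, y_hat, average='weighted'):.3f} "
        f"rec={recall_score(y_te, y_hat, average='weighted'):.3f} "
        f"f1={f1_score(y_te, y_hat, average='weighted'):.3f}"
    )

# LIME fits a local, model-agnostic surrogate around one instance and reports
# how each texture descriptor pushed the prediction for that sample.
explainer = LimeTabularExplainer(
    X_tr, feature_names=FEATURES, class_names=CLASSES, mode="classification"
)
exp = explainer.explain_instance(
    X_te[0], models["Bagging_100"].predict_proba, num_features=len(FEATURES)
)
print(exp.as_list())  # (descriptor condition, local weight) pairs
```

Because LIME only queries `predict_proba`, the same explanation step works unchanged for the glass-box tree and the black-box ensembles, which is what makes it suitable for comparing model families as the study does.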