

Unboxing Deep Learning Model of Food Delivery Service Reviews Using Explainable Artificial Intelligence (XAI) Technique.

Author information

Adak Anirban, Pradhan Biswajeet, Shukla Nagesh, Alamri Abdullah

Affiliations

Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering & IT, School of Civil and Environmental Engineering, University of Technology Sydney, Sydney, NSW 2007, Australia.

Earth Observation Centre, Institute of Climate Change, Universiti Kebangsaan Malaysia, UKM, Bangi 43600, Selangor, Malaysia.

Publication information

Foods. 2022 Jul 8;11(14):2019. doi: 10.3390/foods11142019.

Abstract

The demand for food delivery services (FDSs) during the COVID-19 crisis has been fuelled by consumers who prefer to order meals online and have them delivered to their door rather than wait at a restaurant. Since many restaurants moved online and joined FDSs such as Uber Eats, Menulog, and Deliveroo, customer reviews on internet platforms have become a valuable source of information about a company's performance. FDS organisations strive to collect customer complaints and use this information to identify the improvements needed to enhance customer satisfaction. However, only a few customer opinions are addressed because of the large volume of customer feedback data and the lack of customer service consultants. Instead of relying on customer service experts to read each review, organisations can use artificial intelligence (AI) to identify the required improvements and save money. In the literature, deep learning (DL) methods have achieved remarkable accuracy on large datasets in other domains, but their models lack explainability. Rapid research on explainable AI (XAI) for explaining the predictions of opaque models looks promising but remains to be explored in the FDS domain. This study conducted a sentiment analysis comparing simple and hybrid DL techniques (LSTM, Bi-LSTM, Bi-GRU-LSTM-CNN) in the FDS domain and explained the predictions using SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). The DL models were trained and tested on a customer review dataset extracted from the ProductReview website. Results showed that the LSTM, Bi-LSTM and Bi-GRU-LSTM-CNN models achieved accuracies of 96.07%, 95.85% and 96.33%, respectively. Because FDS organisations aim to identify and address every customer complaint, the chosen model should exhibit few false negatives. The LSTM model was therefore chosen over the other two DL models, Bi-LSTM and Bi-GRU-LSTM-CNN, due to its lower rate of false negatives. XAI techniques such as SHAP and LIME revealed the contribution of individual words towards positive and negative sentiments, which was used to validate the model.
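The word-level attributions that SHAP and LIME produce both rest on the same perturbation idea: change part of the input and measure how the model's output shifts. A minimal sketch of that idea, using a hypothetical lexicon-based scorer in place of the paper's trained LSTM (the lexicon, weights, and function names here are illustrative assumptions, not the authors' pipeline):

```python
# Toy illustration of perturbation-based word attribution, the idea
# underlying LIME/SHAP for text: occlude each word in turn and record
# how the positive-sentiment score changes. A positive attribution
# means the word pushed the prediction toward positive sentiment.

# Hypothetical lexicon standing in for the trained sentiment model.
LEXICON = {"great": 2.0, "fast": 1.0, "cold": -2.0, "late": -1.5}

def positive_score(text: str) -> float:
    """Sum lexicon weights over the tokens of a review."""
    return sum(LEXICON.get(tok, 0.0) for tok in text.lower().split())

def word_attributions(text: str) -> dict:
    """Score drop when each word is occluded from the review."""
    tokens = text.split()
    base = positive_score(text)
    attributions = {}
    for i, tok in enumerate(tokens):
        perturbed = " ".join(tokens[:i] + tokens[i + 1:])
        attributions[tok] = base - positive_score(perturbed)
    return attributions

review = "great food but cold fries"
print(word_attributions(review))
# {'great': 2.0, 'food': 0.0, 'but': 0.0, 'cold': -2.0, 'fries': 0.0}
```

LIME goes further by fitting a local linear surrogate over many random perturbations, and SHAP averages contributions over coalitions of words, but the single-occlusion view above is the core intuition behind the explanations reported in the paper.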


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c10c/9320924/958f8e7dfa5b/foods-11-02019-g001.jpg
