Govea Jaime, Gutierrez Rommel, Villegas-Ch William
Escuela de Ingeniería en Ciberseguridad, FICA, Universidad de Las Américas, Quito, Ecuador.
Front Artif Intell. 2024 Sep 5;7:1410790. doi: 10.3389/frai.2024.1410790. eCollection 2024.
In today's information age, recommender systems have become an essential tool for filtering and personalizing the massive flow of data to users. However, the increasing complexity and opaque nature of these systems have raised concerns about transparency and user trust. A lack of explainability in recommendations can lead to ill-informed decisions and decreased confidence in these advanced systems. Our study addresses this problem by integrating explainability techniques into recommendation systems to improve both the precision of the recommendations and their transparency. We implemented and evaluated recommendation models on the MovieLens and Amazon datasets, applying explainability methods such as LIME and SHAP to interpret the models' decisions. The results indicated significant improvements in the precision of the recommendations, along with a notable increase in users' ability to understand and trust the suggestions provided by the system. For example, we observed a 3% increase in recommendation precision when incorporating these explainability techniques, demonstrating their added value both for performance and for the user experience.
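The abstract names LIME as one of the explainability methods applied to the recommendation models. As a rough illustration of the underlying idea, the sketch below implements LIME's core mechanism from scratch with NumPy: perturb the instance being explained, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature importances. The `predict_rating` function is a hypothetical stand-in for a trained recommender, not the authors' model, and the kernel width and sample count are illustrative choices.

```python
import numpy as np

# Hypothetical stand-in for a trained rating predictor: maps three
# user/item features to a predicted rating (weights are illustrative).
def predict_rating(X):
    return 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] - 1.5 * X[:, 2]

def lime_explain(predict, x, n_samples=5000, width=0.5, seed=0):
    """LIME-style local explanation of predict() at instance x:
    sample perturbations around x, weight them by an exponential
    proximity kernel, fit a weighted linear surrogate, and return
    its per-feature coefficients as local importances."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))
    y = predict(Z)
    # Exponential kernel: perturbations closer to x count more.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * width ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[1:]  # local importance of each input feature

x = np.array([0.8, 0.2, 0.5])
importances = lime_explain(predict_rating, x)
```

Because the stand-in predictor here is linear, the surrogate recovers its weights almost exactly; for a real recommender the coefficients instead describe the model's local behavior around the explained recommendation. In practice one would use the `lime` or `shap` packages rather than this hand-rolled version.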