
Explaining sentiment analysis results on social media texts through visualization.

Authors

Jain Rachna, Kumar Ashish, Nayyar Anand, Dewan Kritika, Garg Rishika, Raman Shatakshi, Ganguly Sahil

Affiliations

Bhagwan Parshuram Institute of Technology, New Delhi, 110089 India.

School of Computer Science Engineering and Technology, Bennett University, Uttar Pradesh, India.

Publication information

Multimed Tools Appl. 2023;82(15):22613-22629. doi: 10.1007/s11042-023-14432-y. Epub 2023 Feb 2.

Abstract

Today, Artificial Intelligence is achieving prodigious real-time performance, thanks to growing computational data and power capacities. However, there is little knowledge about what system results convey; thus, they are at risk of being susceptible to bias, and with the roots of Artificial Intelligence ("AI") in almost every territory, even a minuscule bias can result in excessive damage. Efforts towards making AI interpretable have been made to address fairness, accountability, and transparency concerns. This paper proposes two unique methods to understand the system's decisions aided by visualizing the results. For this study, interpretability has been implemented on Natural Language Processing-based sentiment analysis using data from various social media sites like Twitter, Facebook, and Reddit. With the Valence Aware Dictionary for Sentiment Reasoning ("VADER"), heatmaps are generated, which account for visual justification of the result, increasing comprehensibility. Furthermore, Local Interpretable Model-Agnostic Explanations ("LIME") have been used to provide in-depth insight into the predictions. It has been found experimentally that the proposed system can surpass several contemporary systems designed to attempt interpretability.
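The first explanation route mentioned in the abstract, VADER scoring paired with heatmaps, can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal example assuming the vaderSentiment, pandas, seaborn, and matplotlib packages, with a few hypothetical example posts. It scores each post with VADER and renders the neg/neu/pos/compound components as a heatmap, which is the kind of visual justification the abstract describes.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Hypothetical social media posts standing in for the Twitter/Facebook/Reddit data.
posts = [
    "I absolutely love this new update!",
    "The service was okay, nothing special.",
    "This is the worst experience I have ever had.",
]

analyzer = SentimentIntensityAnalyzer()

# polarity_scores() returns neg/neu/pos proportions and a normalized compound score.
scores = pd.DataFrame([analyzer.polarity_scores(p) for p in posts], index=posts)

# Heatmap of the score matrix: rows are posts, columns are VADER components.
sns.heatmap(scores, annot=True, cmap="RdYlGn", center=0.0)
plt.title("VADER sentiment scores per post")
plt.tight_layout()
plt.show()

Reading the heatmap row by row shows at a glance which posts lean positive or negative and how strongly, without inspecting raw numbers.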

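The second route, LIME, can likewise be sketched. The example below is an assumption about how such a pipeline might be wired together, not the paper's code: it wraps VADER's compound score in a simple pseudo-probability function (predict_proba, a name introduced here for illustration) and asks LIME's text explainer which tokens pushed the prediction toward the positive or negative class.

import numpy as np
from lime.lime_text import LimeTextExplainer
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def predict_proba(texts):
    # Map VADER compound scores (in [-1, 1]) to pseudo-probabilities
    # for the classes [negative, positive], as LIME expects.
    probs = []
    for t in texts:
        compound = analyzer.polarity_scores(t)["compound"]
        p_pos = (compound + 1.0) / 2.0
        probs.append([1.0 - p_pos, p_pos])
    return np.array(probs)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
text = "The battery life is terrible but the screen is gorgeous."
explanation = explainer.explain_instance(text, predict_proba, num_features=6)

# Each tuple is (token, weight); positive weights push toward the "positive" class.
print(explanation.as_list())

The printed token weights give the per-word, in-depth insight into a single prediction that the abstract attributes to LIME.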

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5109/9892668/a1015550fdbd/11042_2023_14432_Fig1_HTML.jpg
