

Explaining sentiment analysis results on social media texts through visualization.

Authors

Jain Rachna, Kumar Ashish, Nayyar Anand, Dewan Kritika, Garg Rishika, Raman Shatakshi, Ganguly Sahil

Affiliations

Bhagwan Parshuram Institute of Technology, New Delhi, 110089 India.

School of Computer Science Engineering and Technology, Bennett University, Uttar Pradesh, India.

Publication

Multimed Tools Appl. 2023;82(15):22613-22629. doi: 10.1007/s11042-023-14432-y. Epub 2023 Feb 2.

DOI: 10.1007/s11042-023-14432-y
PMID: 36747895
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9892668/
Abstract

Today, Artificial Intelligence is achieving prodigious real-time performance, thanks to growing computational data and power capacities. However, there is little knowledge about what system results convey; thus, they are at risk of being susceptible to bias, and with the roots of Artificial Intelligence ("AI") in almost every territory, even a minuscule bias can result in excessive damage. Efforts towards making AI interpretable have been made to address fairness, accountability, and transparency concerns. This paper proposes two unique methods to understand the system's decisions aided by visualizing the results. For this study, interpretability has been implemented on Natural Language Processing-based sentiment analysis using data from various social media sites like Twitter, Facebook, and Reddit. With Valence Aware Dictionary for Sentiment Reasoning ("VADER"), heatmaps are generated, which account for visual justification of the result, increasing comprehensibility. Furthermore, Locally Interpretable Model-Agnostic Explanations ("LIME") have been used to provide in-depth insight into the predictions. It has been found experimentally that the proposed system can surpass several contemporary systems designed to attempt interpretability.
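The abstract's core pipeline scores each social-media text with a valence lexicon (VADER), then visualizes and explains the result. The idea behind lexicon-based scoring can be sketched in a few lines. This is a minimal, self-contained illustration of the VADER *approach*, not the actual VADER library: the tiny lexicon, the negation rule, and the punctuation weight below are all simplified assumptions made for demonstration.

```python
# Toy VADER-style scorer: illustrative only. Real VADER uses a ~7,500-word
# human-rated lexicon plus several more heuristics (intensifiers, caps, etc.).

# Hypothetical mini-lexicon: word -> valence score, roughly on VADER's [-4, 4] scale.
LEXICON = {
    "good": 1.9, "great": 3.1, "love": 3.2, "happy": 2.7,
    "bad": -2.5, "terrible": -2.1, "hate": -2.7, "sad": -2.1,
}
NEGATIONS = {"not", "never", "no"}

def polarity(text: str) -> float:
    """Return a crude compound-like sentiment score in [-1, 1]."""
    words = text.lower().split()
    total = 0.0
    for i, w in enumerate(words):
        score = LEXICON.get(w.strip(".,!?"), 0.0)
        # Negation heuristic: a negator immediately before a word flips its
        # valence, similar in spirit to VADER's negation handling.
        if i > 0 and words[i - 1].strip(".,!?") in NEGATIONS:
            score = -score
        total += score
    # Exclamation marks amplify intensity in the direction of the sentiment,
    # echoing VADER's punctuation heuristic.
    sign = 1 if total > 0 else -1 if total < 0 else 0
    total += 0.292 * min(text.count("!"), 4) * sign
    # Squash to [-1, 1] using the same normalization form VADER applies.
    return total / (total * total + 15) ** 0.5

print(polarity("I love this, it is great!"))    # positive score
print(polarity("This is not good, I hate it"))  # negative score
```

The per-word scores collected inside the loop are exactly what a heatmap visualization (as in the paper) would color token by token; a model-agnostic explainer like LIME instead perturbs the input text and fits a local surrogate, so it needs no access to the lexicon at all.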


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5109/9892668/a1015550fdbd/11042_2023_14432_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5109/9892668/34586639912e/11042_2023_14432_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5109/9892668/0e6d948d4915/11042_2023_14432_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5109/9892668/05c3223a330e/11042_2023_14432_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5109/9892668/f501ae8eb349/11042_2023_14432_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5109/9892668/5b611637d13f/11042_2023_14432_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5109/9892668/a0f2a558fac1/11042_2023_14432_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5109/9892668/1641de98b14c/11042_2023_14432_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5109/9892668/77bf5ae2b00d/11042_2023_14432_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5109/9892668/76c5a84fd30b/11042_2023_14432_Fig10_HTML.jpg

Similar Articles

1. Explaining sentiment analysis results on social media texts through visualization.
   Multimed Tools Appl. 2023;82(15):22613-22629. doi: 10.1007/s11042-023-14432-y. Epub 2023 Feb 2.
2. Toward explainable AI (XAI) for mental health detection based on language behavior.
   Front Psychiatry. 2023 Dec 7;14:1219479. doi: 10.3389/fpsyt.2023.1219479. eCollection 2023.
3. Explainable AI for Bioinformatics: Methods, Tools and Applications.
   Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
4. Enhancing the Interpretability of Malaria and Typhoid Diagnosis with Explainable AI and Large Language Models.
   Trop Med Infect Dis. 2024 Sep 16;9(9):216. doi: 10.3390/tropicalmed9090216.
5. Vaccine sentiment analysis using BERT + NBSVM and geo-spatial approaches.
   J Supercomput. 2023 May 7:1-31. doi: 10.1007/s11227-023-05319-8.
6. Sentiment analysis in medication adherence: using ruled-based and artificial intelligence-driven algorithms to understand patient medication experiences.
   Int J Clin Pharm. 2024 Oct 4. doi: 10.1007/s11096-024-01803-0.
7. Topics and Sentiment Surrounding Vaping on Twitter and Reddit During the 2019 e-Cigarette and Vaping Use-Associated Lung Injury Outbreak: Comparative Study.
   J Med Internet Res. 2022 Dec 13;24(12):e39460. doi: 10.2196/39460.
8. Locating Loneliness Through Social Intelligence Analysis.
   Stud Health Technol Inform. 2024 Jan 25;310:594-598. doi: 10.3233/SHTI231034.
9. New explainability method for BERT-based model in fake news detection.
   Sci Rep. 2021 Dec 8;11(1):23705. doi: 10.1038/s41598-021-03100-6.
10. Explainable depression symptom detection in social media.
   Health Inf Sci Syst. 2024 Sep 6;12(1):47. doi: 10.1007/s13755-024-00303-9. eCollection 2024 Dec.

Cited By

1. Exploring Psychological Trends in Populations With Chronic Obstructive Pulmonary Disease During COVID-19 and Beyond: Large-Scale Longitudinal Twitter Mining Study.
   J Med Internet Res. 2025 Mar 5;27:e54543. doi: 10.2196/54543.
2. A hybrid self-supervised model predicting life satisfaction in South Korea.
   Front Public Health. 2024 Oct 17;12:1445864. doi: 10.3389/fpubh.2024.1445864. eCollection 2024.
3. Deep Learning Paradigm and Its Bias for Coronary Artery Wall Segmentation in Intravascular Ultrasound Scans: A Closer Look.
   J Cardiovasc Dev Dis. 2023 Dec 4;10(12):485. doi: 10.3390/jcdd10120485.

References

1. Machine learning modeling practices to support the principles of AI and ethics in nutrition research.
   Nutr Diabetes. 2022 Dec 2;12(1):48. doi: 10.1038/s41387-022-00226-y.
2. On Interpretability of Artificial Neural Networks: A Survey.
   IEEE Trans Radiat Plasma Med Sci. 2021 Nov;5(6):741-760. doi: 10.1109/trpms.2021.3066428. Epub 2021 Mar 17.
3. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI.
   IEEE Trans Neural Netw Learn Syst. 2021 Nov;32(11):4793-4813. doi: 10.1109/TNNLS.2020.3027314. Epub 2021 Oct 27.
4. Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability.
   Hastings Cent Rep. 2019 Jan;49(1):15-21. doi: 10.1002/hast.973.
5. Can we open the black box of AI?
   Nature. 2016 Oct 6;538(7623):20-23. doi: 10.1038/538020a.
6. A generalized LSTM-like training algorithm for second-order recurrent neural networks.
   Neural Netw. 2012 Jan;25(1):70-83. doi: 10.1016/j.neunet.2011.07.003. Epub 2011 Jul 18.
7. FACCT (Foundation for Accountability): a large measure of quality.
   J AHIMA. 1997 Jun;68(6):41-6.