Suppr 超能文献


Explainable depression symptom detection in social media

Author Information

Bao Eliseo, Pérez Anxo, Parapar Javier

Affiliation

Information Retrieval Lab (IRLab), Centro de Investigación en Tecnoloxías da Información e da Comunicación (CITIC), Campus de Elviña, 15071 A Coruña, Galicia, Spain.

Publication Information

Health Inf Sci Syst. 2024 Sep 6;12(1):47. doi: 10.1007/s13755-024-00303-9. eCollection 2024 Dec.

DOI: 10.1007/s13755-024-00303-9
PMID: 39247905
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11379836/
Abstract

Users of social platforms often perceive these sites as supportive spaces to post about their mental health issues. Those conversations contain important traces about individuals' health risks. Recently, researchers have exploited this online information to construct mental health detection models, which aim to identify users at risk on platforms like Twitter, Reddit or Facebook. Most of these models are focused on achieving good classification results, ignoring the explainability and interpretability of the decisions. Recent research has pointed out the importance of using clinical markers, such as the use of symptoms, to improve trust in the computational models by health professionals. In this paper, we introduce transformer-based architectures designed to detect and explain the appearance of depressive symptom markers in user-generated content from social media. We present two approaches: (i) train a model to classify, and another one to explain the classifier's decision separately and (ii) unify the two tasks simultaneously within a single model. Additionally, for this latter manner, we also investigated the performance of recent conversational Large Language Models (LLMs) utilizing both in-context learning and finetuning. Our models provide natural language explanations, aligning with validated symptoms, thus enabling clinicians to interpret the decisions more effectively. We evaluate our approaches using recent symptom-focused datasets, using both offline metrics and expert-in-the-loop evaluations to assess the quality of our models' explanations. Our findings demonstrate that it is possible to achieve good classification results while generating interpretable symptom-based explanations.


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65f4/11379836/ea615ead04e5/13755_2024_303_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65f4/11379836/dabb58c5c336/13755_2024_303_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65f4/11379836/52d0e895db23/13755_2024_303_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65f4/11379836/2ff516cbce04/13755_2024_303_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/65f4/11379836/b04ac27322a6/13755_2024_303_Fig5_HTML.jpg

Similar Articles

1
Explainable depression symptom detection in social media.
Health Inf Sci Syst. 2024 Sep 6;12(1):47. doi: 10.1007/s13755-024-00303-9. eCollection 2024 Dec.
2
Topics and Sentiment Surrounding Vaping on Twitter and Reddit During the 2019 e-Cigarette and Vaping Use-Associated Lung Injury Outbreak: Comparative Study.
J Med Internet Res. 2022 Dec 13;24(12):e39460. doi: 10.2196/39460.
3
An Explainable Artificial Intelligence Text Classifier for Suicidality Prediction in Youth Crisis Text Line Users: Development and Validation Study.
JMIR Public Health Surveill. 2025 Jan 29;11:e63809. doi: 10.2196/63809.
4
Toward explainable AI (XAI) for mental health detection based on language behavior.
Front Psychiatry. 2023 Dec 7;14:1219479. doi: 10.3389/fpsyt.2023.1219479. eCollection 2023.
5
COVID-Net Biochem: an explainability-driven framework to building machine learning models for predicting survival and kidney injury of COVID-19 patients from clinical and biochemistry data.
Sci Rep. 2023 Oct 9;13(1):17001. doi: 10.1038/s41598-023-42203-0.
6
Large Language Models' Accuracy in Emulating Human Experts' Evaluation of Public Sentiments about Heated Tobacco Products on Social Media: Evaluation Study.
J Med Internet Res. 2025 Mar 4;27:e63631. doi: 10.2196/63631.
7
Explaining sentiment analysis results on social media texts through visualization.
Multimed Tools Appl. 2023;82(15):22613-22629. doi: 10.1007/s11042-023-14432-y. Epub 2023 Feb 2.
8
Identifying Topics for E-Cigarette User-Generated Contents: A Case Study From Multiple Social Media Platforms.
J Med Internet Res. 2017 Jan 20;19(1):e24. doi: 10.2196/jmir.5780.
9
Predicting Age Groups of Reddit Users Based on Posting Behavior and Metadata: Classification Model Development and Validation.
JMIR Public Health Surveill. 2021 Mar 16;7(3):e25807. doi: 10.2196/25807.
10
Large Language Models and User Trust: Consequence of Self-Referential Learning Loop and the Deskilling of Health Care Professionals.
J Med Internet Res. 2024 Apr 25;26:e56764. doi: 10.2196/56764.

Cited By

1
: enhancing the transferability of large language models for depression detection using free-text explanations.
Front Artif Intell. 2025 May 21;8:1564828. doi: 10.3389/frai.2025.1564828. eCollection 2025.
2
Deep Learning-Based Detection of Depression and Suicidal Tendencies in Social Media Data with Feature Selection.
Behav Sci (Basel). 2025 Mar 12;15(3):352. doi: 10.3390/bs15030352.

References

1
MHA: a multimodal hierarchical attention model for depression detection in social media.
Health Inf Sci Syst. 2023 Jan 18;11(1):6. doi: 10.1007/s13755-022-00197-5. eCollection 2023 Dec.
2
An explainable predictive model for suicide attempt risk using an ensemble learning and Shapley Additive Explanations (SHAP) approach.
Asian J Psychiatr. 2023 Jan;79:103316. doi: 10.1016/j.ajp.2022.103316. Epub 2022 Nov 7.
3
The promise of a model-based psychiatry: building computational models of mental ill health.
Lancet Digit Health. 2022 Nov;4(11):e816-e828. doi: 10.1016/S2589-7500(22)00152-2. Epub 2022 Oct 10.
4
Automatic depression score estimation with word embedding models.
Artif Intell Med. 2022 Oct;132:102380. doi: 10.1016/j.artmed.2022.102380. Epub 2022 Aug 24.
5
The role of explainability in creating trustworthy artificial intelligence for health care: A comprehensive survey of the terminology, design choices, and evaluation strategies.
J Biomed Inform. 2021 Jan;113:103655. doi: 10.1016/j.jbi.2020.103655. Epub 2020 Dec 10.
6
Lightme: analysing language in internet support groups for mental health.
Health Inf Sci Syst. 2020 Oct 13;8(1):34. doi: 10.1007/s13755-020-00115-7. eCollection 2020 Dec.
7
Stigma, biomarkers, and algorithmic bias: recommendations for precision behavioral health with artificial intelligence.
JAMIA Open. 2020 Jan 22;3(1):9-15. doi: 10.1093/jamiaopen/ooz054. eCollection 2020 Apr.
8
Early Detection of Depression: Social Network Analysis and Random Forest Techniques.
J Med Internet Res. 2019 Jun 10;21(6):e12554. doi: 10.2196/12554.
9
Depression detection from social network data using machine learning techniques.
Health Inf Sci Syst. 2018 Aug 27;6(1):8. doi: 10.1007/s13755-018-0046-0. eCollection 2018 Dec.
10
Understanding Depressive Symptoms and Psychosocial Stressors on Twitter: A Corpus-Based Study.
J Med Internet Res. 2017 Feb 28;19(2):e48. doi: 10.2196/jmir.6895.