Explainable AI for time series prediction in economic mental health analysis.

Author Information

Yang Ying, Wen Lifen, Li Li

Affiliations

Shaanxi Institute of Teacher Development, Xi'an, China.

School of Teacher Development, Shaanxi Normal University, Xi'an, China.

Publication Information

Front Med (Lausanne). 2025 Jun 26;12:1591793. doi: 10.3389/fmed.2025.1591793. eCollection 2025.

DOI: 10.3389/fmed.2025.1591793
PMID: 40641972
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12241169/
Abstract

INTRODUCTION

The integration of Explainable Artificial Intelligence (XAI) into time series prediction plays a pivotal role in advancing economic mental health analysis, ensuring both transparency and interpretability in predictive models. Traditional deep learning approaches, while highly accurate, often operate as black boxes, making them less suitable for high-stakes domains such as mental health forecasting, where explainability is critical for trust and decision-making. Existing explainability methods provide only partial insights, limiting their practical application in sensitive domains like mental health analytics.

METHODS

To address these challenges, we propose a novel framework that integrates explainability directly within the time series prediction process, combining both intrinsic and post-hoc interpretability techniques. Our approach systematically incorporates feature attribution, causal reasoning, and human-centric explanation generation using an interpretable model architecture.
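The abstract does not give implementation details for the feature-attribution component. As a hypothetical illustration only (the toy data, the linear stand-in model, and all names below are assumptions, not the authors' method), a minimal sketch of post-hoc attribution for a time-series predictor via permutation importance looks like this:

```python
import numpy as np

# Hypothetical sketch of post-hoc feature attribution via permutation
# importance. The toy data and linear stand-in model are illustrative
# assumptions, not the framework described in the paper.

rng = np.random.default_rng(0)

# Toy data: 200 observation windows of 3 candidate predictors.
X = rng.normal(size=(200, 3))
# Target depends strongly on feature 0, weakly on feature 1, not on 2.
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# A fitted linear predictor standing in for the forecasting model.
w = np.linalg.lstsq(X, y, rcond=None)[0]

def predict(X_in):
    return X_in @ w

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(predict, X, y, n_repeats=20, seed=1):
    """Average error increase when each feature is shuffled;
    a larger increase means the model relies on that feature more."""
    rng = np.random.default_rng(seed)
    base = mse(y, predict(X))
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores[j] += mse(y, predict(Xp)) - base
    return scores / n_repeats

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 should dominate; feature 2 should be near zero
```

Permutation importance is model-agnostic, which is what makes post-hoc techniques of this kind attachable to an otherwise opaque time-series predictor.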

RESULTS

Experimental results demonstrate that our method maintains competitive accuracy while significantly improving interpretability. The proposed framework supports more informed decision-making for policymakers and mental health professionals.

DISCUSSION

This framework ensures that AI-driven mental health screening tools remain not only highly accurate but also trustworthy, interpretable, and aligned with domain-specific knowledge, ultimately bridging the gap between predictive performance and human understanding.


Similar Articles

1. Explainable AI for time series prediction in economic mental health analysis. Front Med (Lausanne). 2025 Jun 26;12:1591793. doi: 10.3389/fmed.2025.1591793. eCollection 2025.
2. Exploring the Applications of Explainability in Wearable Data Analytics: Systematic Literature Review. J Med Internet Res. 2024 Dec 24;26:e53863. doi: 10.2196/53863.
3. A Responsible Framework for Assessing, Selecting, and Explaining Machine Learning Models in Cardiovascular Disease Outcomes Among People With Type 2 Diabetes: Methodology and Validation Study. JMIR Med Inform. 2025 Jun 27;13:e66200. doi: 10.2196/66200.
4. Stabilizing machine learning for reproducible and explainable results: A novel validation approach to subject-specific insights. Comput Methods Programs Biomed. 2025 Jun 21;269:108899. doi: 10.1016/j.cmpb.2025.108899.
5. An explainable-by-design end-to-end AI framework based on prototypical part learning for lesion detection and classification in Digital Breast Tomosynthesis images. Comput Struct Biotechnol J. 2025 Jun 10;27:2649-2660. doi: 10.1016/j.csbj.2025.06.008. eCollection 2025.
6. Advancing personalized healthcare: leveraging explainable AI for BPPV risk assessment. Health Inf Sci Syst. 2024 Nov 24;13(1):1. doi: 10.1007/s13755-024-00317-3. eCollection 2025 Dec.
7. Are Artificial Intelligence Models Listening Like Cardiologists? Bridging the Gap Between Artificial Intelligence and Clinical Reasoning in Heart-Sound Classification Using Explainable Artificial Intelligence. Bioengineering (Basel). 2025 May 22;12(6):558. doi: 10.3390/bioengineering12060558.
8. Synergizing advanced algorithm of explainable artificial intelligence with hybrid model for enhanced brain tumor detection in healthcare. Sci Rep. 2025 Jul 1;15(1):20489. doi: 10.1038/s41598-025-07524-2.
9. Radiology report generation using automatic keyword adaptation, frequency-based multi-label classification and text-to-text large language models. Comput Biol Med. 2025 Jul 3;196(Pt A):110625. doi: 10.1016/j.compbiomed.2025.110625.
10. Multimodal interpretable data-driven models for early prediction of multidrug resistance using multivariate time series. Health Inf Sci Syst. 2025 May 7;13(1):35. doi: 10.1007/s13755-025-00351-9. eCollection 2025 Dec.
