Yang Ying, Wen Lifen, Li Li
Shaanxi Institute of Teacher Development, Xi'an, China.
School of Teacher Development, Shaanxi Normal University, Xi'an, China.
Front Med (Lausanne). 2025 Jun 26;12:1591793. doi: 10.3389/fmed.2025.1591793. eCollection 2025.
The integration of Explainable Artificial Intelligence (XAI) into time series prediction plays a pivotal role in advancing economic mental health analysis, ensuring both transparency and interpretability in predictive models. Traditional deep learning approaches, while highly accurate, often operate as black boxes, making them less suitable for high-stakes domains such as mental health forecasting, where explainability is critical for trust and decision-making. Existing explainability methods provide only partial insights, limiting their practical application in sensitive domains like mental health analytics.
To address these challenges, we propose a novel framework that integrates explainability directly within the time series prediction process, combining intrinsic and post-hoc interpretability techniques. Our approach systematically incorporates feature attribution, causal reasoning, and human-centric explanation generation within an interpretable model architecture.
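The abstract does not include the authors' code. As a minimal sketch of what combining intrinsic and post-hoc interpretability can look like for a time-series forecaster, the Python below pairs an attention-based model, whose attention weights act as intrinsic attributions over time steps, with a post-hoc occlusion attribution. The class and function names (AttentiveForecaster, occlusion_attribution), the tensor shapes, and the toy data are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: intrinsic (attention-weight) plus post-hoc
# (occlusion) explanations for a one-step-ahead time-series forecaster.
import torch
import torch.nn as nn

class AttentiveForecaster(nn.Module):
    """GRU forecaster with additive attention over time steps (hypothetical)."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each time step
        self.head = nn.Linear(hidden, 1)   # maps attended context to forecast

    def forward(self, x):                   # x: (batch, time, features)
        h, _ = self.encoder(x)              # h: (batch, time, hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)   # (batch, time)
        context = (w.unsqueeze(-1) * h).sum(dim=1)           # (batch, hidden)
        # w doubles as an intrinsic attribution over time steps
        return self.head(context).squeeze(-1), w

def occlusion_attribution(model, x, baseline=0.0):
    """Post-hoc attribution: how much the forecast shifts when each
    time step is replaced by a baseline value."""
    with torch.no_grad():
        y_ref, _ = model(x)
        scores = torch.zeros(x.shape[:2])   # (batch, time)
        for t in range(x.shape[1]):
            x_occ = x.clone()
            x_occ[:, t, :] = baseline       # mask one time step
            y_occ, _ = model(x_occ)
            scores[:, t] = (y_ref - y_occ).abs()
    return scores

# Toy usage: 4 series, 24 time steps, 3 features (e.g., economic indicators).
model = AttentiveForecaster(n_features=3)
x = torch.randn(4, 24, 3)
forecast, intrinsic_w = model(x)            # intrinsic explanation
posthoc = occlusion_attribution(model, x)   # post-hoc explanation
print(forecast.shape, intrinsic_w.shape, posthoc.shape)
```

Comparing the intrinsic attention weights against the post-hoc occlusion scores, as this sketch allows, is one simple way to check that a model's built-in explanations agree with behavior-based ones; the paper's actual framework additionally covers causal reasoning and human-centric explanation generation, which this toy example does not attempt.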
Experimental results demonstrate that our method maintains competitive accuracy while significantly improving interpretability. The proposed framework supports more informed decision-making for policymakers and mental health professionals.
This framework ensures that AI-driven mental health screening tools remain not only highly accurate but also trustworthy, interpretable, and aligned with domain-specific knowledge, ultimately bridging the gap between predictive performance and human understanding.