Personalized prediction and intervention for adolescent mental health: multimodal temporal modeling using Transformer

Authors

Zhang Guiyuan, Li Shuang

Affiliations

Student Affairs Department of the Party Committee of Guangxi Vocational College of Water Resources and Electric Power, Nanning, China.

Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China.

Publication Information

Front Psychiatry. 2025 Jun 23;16:1579543. doi: 10.3389/fpsyt.2025.1579543. eCollection 2025.

Abstract

INTRODUCTION

Adolescent mental health problems are becoming increasingly serious, making early prediction and personalized intervention important research topics. Existing methods face limitations in handling complex emotional fluctuations and multimodal data fusion.

METHODS

To address these challenges, we propose a novel model, MPHI Trans, which integrates multimodal data and temporal modeling techniques to accurately capture dynamic changes in adolescent mental health status.
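The abstract does not detail the architecture of MPHI Trans beyond multimodal fusion and Transformer-based temporal modeling. The following is a minimal sketch, assuming a generic design in which each modality (e.g., text, audio, and physiological signals, as available in DAIC-WOZ and WESAD) is projected per time step, the projections are fused, and a Transformer encoder models the fused sequence. All class names, dimensions, and the mean-pooling head are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a multimodal temporal Transformer classifier.
# Module names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class MultimodalTemporalTransformer(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, physio_dim=16,
                 d_model=256, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        # Per-modality projections into a shared embedding space
        self.text_proj = nn.Linear(text_dim, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.physio_proj = nn.Linear(physio_dim, d_model)
        # Simple fusion: concatenate projected modalities, then reduce
        self.fuse = nn.Linear(3 * d_model, d_model)
        # Temporal modeling over the fused per-time-step embeddings
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, text, audio, physio):
        # Each input: (batch, time, modality_dim)
        fused = self.fuse(torch.cat([
            self.text_proj(text),
            self.audio_proj(audio),
            self.physio_proj(physio),
        ], dim=-1))
        h = self.temporal(fused)           # (batch, time, d_model)
        pooled = h.mean(dim=1)             # mean-pool over time
        return self.classifier(pooled)     # (batch, n_classes)


# Example forward pass with random tensors (batch of 4, 20 time steps)
model = MultimodalTemporalTransformer()
logits = model(torch.randn(4, 20, 768),
               torch.randn(4, 20, 128),
               torch.randn(4, 20, 16))
print(logits.shape)  # torch.Size([4, 2])
```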

RESULTS

Experimental results on the DAIC-WOZ and WESAD datasets demonstrate that MPHI Trans significantly outperforms advanced models such as BERT, T5, and XLNet. On DAIC-WOZ, MPHI Trans achieved an accuracy of 89%, recall of 84%, precision of 85%, F1 score of 84%, and AUC-ROC of 92%. On WESAD, the model attained an accuracy of 88%, recall of 81%, precision of 82%, F1 score of 81%, and AUC-ROC of 91%.
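For reference, the reported measures are standard binary-classification metrics. The snippet below shows how they are typically computed with scikit-learn; the labels, scores, and 0.5 threshold are placeholder values, not the paper's data.

```python
# Computing accuracy, recall, precision, F1, and AUC-ROC on placeholder data.
from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             f1_score, roc_auc_score)

y_true = [0, 1, 1, 0, 1, 0, 1, 1]                    # ground-truth labels
y_score = [0.2, 0.9, 0.7, 0.4, 0.8, 0.1, 0.3, 0.6]   # predicted probabilities
y_pred = [int(s >= 0.5) for s in y_score]            # threshold at 0.5

print("accuracy :", accuracy_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_score))
```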

DISCUSSION

Ablation studies confirm the critical contributions of the temporal modeling and multimodal fusion modules, as their removal substantially degrades model performance, underscoring their indispensable roles in capturing emotional fluctuations and information fusion.
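The abstract does not describe the ablation procedure itself. The sketch below illustrates the idea using the hypothetical model from the methods sketch above: bypassing the Transformer encoder approximates "without temporal modeling", and zeroing out the non-text inputs approximates "without multimodal fusion". The paper's actual ablated variants and training protocol may differ.

```python
# Illustrative ablation sketch, reusing the hypothetical
# MultimodalTemporalTransformer defined in the earlier methods sketch.
import torch

model = MultimodalTemporalTransformer()
text, audio, physio = (torch.randn(4, 20, 768),
                       torch.randn(4, 20, 128),
                       torch.randn(4, 20, 16))

# Full model
logits_full = model(text, audio, physio)

# Ablation 1: w/o temporal modeling -- skip the Transformer encoder and
# classify the mean of the fused per-time-step embeddings directly
fused = model.fuse(torch.cat([model.text_proj(text),
                              model.audio_proj(audio),
                              model.physio_proj(physio)], dim=-1))
logits_no_temporal = model.classifier(fused.mean(dim=1))

# Ablation 2: w/o multimodal fusion -- suppress audio and physiological inputs
logits_no_fusion = model(text, torch.zeros_like(audio), torch.zeros_like(physio))

print(logits_full.shape, logits_no_temporal.shape, logits_no_fusion.shape)
```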


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/42fc/12232596/a62078c6678b/fpsyt-16-1579543-g001.jpg
