Jia Xiaowen, Chen Jingxia, Liu Kexin, Wang Qian, He Jialing
College of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an, Shaanxi, China.
Math Biosci Eng. 2025 Feb 27;22(3):652-676. doi: 10.3934/mbe.2025024.
Traditional depression detection methods typically rely on single-modal data, but these approaches are limited by individual differences, noise interference, and emotional fluctuations. To address the low accuracy of single-modal depression detection and the poor fusion of multimodal features from electroencephalogram (EEG) and speech signals, we propose a multimodal depression detection model based on EEG and speech signals, named the multi-head attention-GCN_ViT (MHA-GCN_ViT). This approach leverages deep learning techniques, including graph convolutional networks (GCN) and vision transformers (ViT), to extract the frequency-domain and spatiotemporal features of EEG signals and fuse them with the frequency-domain features of speech signals. First, a discrete wavelet transform (DWT) was used to extract wavelet features from 29 channels of EEG signals. These features served as node attributes to form a feature matrix; the Pearson correlation coefficients between channels were then computed to construct an adjacency matrix representing the brain network structure. This graph was fed into a GCN for deep feature learning, and a multi-head attention mechanism was introduced to enhance the GCN's capability to represent brain networks. Using a short-time Fourier transform (STFT), we extracted 2D spectral features of the EEG signals and mel spectrogram features of the speech signals; both were further processed by a ViT to obtain deep features. Finally, the multiple features from the EEG and speech spectrograms were fused at the decision level for depression classification. Five-fold cross-validation on the MODMA dataset yielded an accuracy, precision, recall, and F1 score of 89.03%, 90.16%, 89.04%, and 88.83%, respectively, indicating a significant improvement in multimodal depression detection performance. Furthermore, MHA-GCN_ViT demonstrated robust performance in depression detection and exhibited broad applicability, with potential for extension to multimodal detection tasks in other psychological and neurological disorders.
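To make the EEG branch of the pipeline concrete, the following is a minimal Python sketch of the steps the abstract describes: DWT features per channel as node attributes, a Pearson-correlation adjacency matrix as the brain network, and one graph-convolution layer. The wavelet family (`db4`), decomposition level, correlation threshold, and hidden size are illustrative assumptions, not values reported in the paper, and the multi-head attention, ViT, and decision-level fusion stages are omitted.

```python
# Sketch of the EEG branch: DWT node features -> Pearson adjacency -> one GCN layer.
import numpy as np
import pywt
import torch
import torch.nn as nn

def dwt_node_features(eeg, wavelet="db4", level=4):
    """eeg: (n_channels, n_samples) -> (n_channels, n_features) node feature matrix."""
    feats = []
    for ch in eeg:
        coeffs = pywt.wavedec(ch, wavelet, level=level)
        # Simple per-subband statistics stand in for the paper's wavelet features.
        feats.append([f(c) for c in coeffs for f in (np.mean, np.std)])
    return np.asarray(feats, dtype=np.float32)

def pearson_adjacency(node_feats, threshold=0.3):
    """Adjacency matrix from channel-wise Pearson correlation of node features."""
    corr = np.corrcoef(node_feats)               # (n_channels, n_channels)
    adj = (np.abs(corr) >= threshold).astype(np.float32)
    np.fill_diagonal(adj, 0.0)                   # self-loops are added inside the GCN layer
    return adj

class GCNLayer(nn.Module):
    """One layer of H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = deg_inv_sqrt[:, None] * a_hat * deg_inv_sqrt[None, :]
        return torch.relu(a_norm @ self.lin(x))

# Toy usage on random data shaped like 29-channel EEG.
eeg = np.random.randn(29, 2500)
x = torch.tensor(dwt_node_features(eeg))
adj = torch.tensor(pearson_adjacency(x.numpy()))
h = GCNLayer(x.size(1), 64)(x, adj)              # (29, 64) node embeddings
print(h.shape)
```

In the full model, these node embeddings would pass through the multi-head attention module before being fused with the ViT features of the EEG and speech spectrograms at the decision level.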