Li Yutong, Liu Zhenyu, Zhou Li, Yuan Xiaoyan, Shangguan Zixuan, Hu Xiping, Hu Bin
Gansu Provincial Key Laboratory of Wearable Computing, Lanzhou University, Lanzhou, China.
Front Neurosci. 2023 May 24;17:1188434. doi: 10.3389/fnins.2023.1188434. eCollection 2023.
Deep learning methods based on convolutional neural networks (CNNs) have demonstrated impressive performance in depression analysis. Nevertheless, some critical challenges remain to be resolved in these methods: (1) Owing to their spatial locality, it is still difficult for CNNs to learn long-range inductive biases during low-level feature extraction across different facial regions. (2) A model with only a single attention head struggles to concentrate on various parts of the face simultaneously, making it less sensitive to other important facial regions associated with depression. In facial depression recognition, many of the clues come from several areas of the face at once, e.g., the mouth and eyes.
To address these issues, we present an end-to-end integrated framework called the Hybrid Multi-head Cross Attention Network (HMHN), which comprises two stages. The first stage consists of the Grid-Wise Attention block (GWA) and the Deep Feature Fusion block (DFF) for low-level visual depression feature learning. In the second stage, we obtain a global representation by encoding high-order interactions among local features with the Multi-head Cross Attention block (MAB) and the Attention Fusion block (AFB).
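The core of the second stage is multi-head attention over local facial-region features, which lets several regions (e.g., mouth and eyes) be weighted simultaneously. The following is a minimal NumPy sketch of multi-head scaled dot-product attention; the function name, random projection weights, and feature dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(queries, keys_values, num_heads, rng):
    """Illustrative multi-head (cross) attention: each head attends from
    `queries` to `keys_values`, so several local facial-region features
    can be attended to at once. Weights here are random; a real model
    learns them during training."""
    n_q, d = queries.shape
    n_kv, _ = keys_values.shape
    assert d % num_heads == 0
    d_h = d // num_heads
    # Hypothetical projection matrices for queries, keys, and values.
    W_q, W_k, W_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    # Project and split into heads: (num_heads, n, d_h).
    Q = (queries @ W_q).reshape(n_q, num_heads, d_h).transpose(1, 0, 2)
    K = (keys_values @ W_k).reshape(n_kv, num_heads, d_h).transpose(1, 0, 2)
    V = (keys_values @ W_v).reshape(n_kv, num_heads, d_h).transpose(1, 0, 2)
    # Scaled dot-product attention per head.
    attn = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_h), axis=-1)
    # Apply attention weights and concatenate the heads back together.
    out = (attn @ V).transpose(1, 0, 2).reshape(n_q, d)
    return out, attn

rng = np.random.default_rng(0)
regions = rng.standard_normal((4, 16))  # 4 local facial-region feature vectors
fused, attn = multi_head_attention(regions, regions, num_heads=4, rng=rng)
print(fused.shape, attn.shape)  # fused: (4, 16); attn: (4 heads, 4, 4)
```

With queries and keys/values drawn from the same set of region features (as above), this reduces to self-attention; cross attention simply passes two different feature sets.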
We evaluated our method on the AVEC2013 and AVEC2014 depression datasets. The results on AVEC 2013 (RMSE = 7.38, MAE = 6.05) and AVEC 2014 (RMSE = 7.60, MAE = 6.01) demonstrated the efficacy of our method, which outperformed most state-of-the-art video-based depression recognition approaches.
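RMSE and MAE are the standard regression metrics for depression-score prediction on these benchmarks. A minimal sketch of both, using toy score values (illustrative only, not from the paper):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error: penalizes large errors more heavily.
    d = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the prediction errors.
    d = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(d)))

# Hypothetical ground-truth and predicted depression scores.
truth = [10, 25, 3, 40, 18]
preds = [12, 20, 5, 35, 20]
print(rmse(truth, preds))  # ~3.52
print(mae(truth, preds))   # 3.2
```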
We proposed a hybrid deep learning model for depression recognition that captures the higher-order interactions between the depression features of multiple facial regions. It effectively reduces the error in depression recognition and shows great potential for clinical applications.