Qian Lipeng, Zuo Qiong, Li Dahu, Zhu Hong
School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430070, Hubei, China.
Neural Netw. 2025 May;185:107131. doi: 10.1016/j.neunet.2025.107131. Epub 2025 Jan 17.
In the Imbalanced Multivariate Time Series Classification (ImMTSC) task, minority-class instances typically correspond to critical events, such as system faults in power grids or abnormal health occurrences in medical monitoring. Despite being rare and random, these events are highly significant. The dynamic spatial-temporal relationships between minority-class instances and other instances make them more prone to interference from neighboring instances during classification. Increasing the number of minority-class samples during training often results in overfitting to a single pattern of the minority class. Contrastive learning ensures that majority-class instances learn similar features in the representation space. However, it does not effectively aggregate features from neighboring minority-class instances, hindering its ability to properly represent these instances in the ImMTS dataset. Therefore, we propose a dynamic graph-based mixed supervised contrastive learning method (DGMSCL) that effectively fits minority-class features without increasing their number, while also separating them from other instances in the representation space. First, it reconstructs the input sequence into dynamic graphs and employs a hierarchical attention graph neural network (HAGNN) to generate discriminative embedding representations between instances. Based on this, we introduce a novel mixed contrastive loss, which includes weight-augmented inter-graph supervised contrast (WAIGC) and context-based minority class-aware contrast (MCAC). It adjusts sample weights based on their quantity and intrinsic characteristics, placing greater emphasis on the minority-class loss to produce more effective gradient gains during training. Additionally, it separates minority-class instances from adjacent transitional instances in the representation space, enhancing their representational capacity.
Extensive experiments across various scenarios and datasets with differing degrees of imbalance demonstrate that DGMSCL consistently outperforms existing baseline models. Specifically, DGMSCL achieves higher overall classification accuracy, as evidenced by significantly improved average F1-score, G-mean, and kappa coefficient across multiple datasets. Moreover, classification results on a real-world power dataset show that DGMSCL generalizes well to real-world applications.
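To make the weight-augmentation idea concrete, the following is a minimal illustrative sketch of a class-frequency-weighted supervised contrastive loss, in the spirit of WAIGC. It is an assumption-based simplification, not the paper's actual formulation: the function `weighted_supcon_loss`, the inverse-frequency weighting scheme, and all parameter names are hypothetical, introduced only to show how minority-class anchors can be made to contribute larger gradients.

```python
import numpy as np

def weighted_supcon_loss(embeddings, labels, temperature=0.1):
    """Class-frequency-weighted supervised contrastive loss (illustrative sketch).

    Each anchor's contrastive loss is scaled inversely with its class
    frequency, so rare (minority-class) anchors carry more weight. This
    approximates, but is not identical to, the WAIGC component of DGMSCL.
    """
    # L2-normalize embeddings so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    # Inverse-frequency class weights (assumed scheme, not the paper's exact one)
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes, counts))
    total = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors with no same-class partner are skipped
        # Denominator sums over all other samples (anchor excluded)
        others = np.delete(sim[i], i)
        log_den = np.log(np.exp(others).sum())
        log_probs = [sim[i][j] - log_den for j in positives]
        # Minority-class anchors receive a larger weight
        w = n / (len(classes) * freq[labels[i]])
        total += -w * np.mean(log_probs)
    return total / n
```

In this toy weighting, an anchor from a class with 2 of 6 samples gets weight 6/(2*2) = 1.5, while an anchor from a class with 4 of 6 samples gets 6/(2*4) = 0.75, so the minority class dominates the gradient without any oversampling.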