Huang Xin, Zhang Ranqiao, Li Yuanyuan, Yang Fan, Zhu Zhiqin, Zhou Zhihao
College of Automation, Chongqing University of Posts and Telecommunications, Nan'an District, 400065, Chongqing, China.
Neural Netw. 2025 Apr;184:107055. doi: 10.1016/j.neunet.2024.107055. Epub 2024 Dec 20.
Multi-view clustering can better handle high-dimensional data by combining information from multiple views, which is important in big data mining. However, most existing models simply perform feature fusion after feature extraction for individual views and fail to capture the holistic attribute information of multi-view data because they ignore the significant disparities among views, which seriously degrades clustering performance. In this paper, inspired by the attention mechanism, an approach called Multi-View Fusion Clustering with Attentive Contrastive Learning (MFC-ACL) is proposed to tackle these issues. First, the Att-AE module, which optimizes an autoencoder (AE) with attention networks, is constructed to effectively extract view features that carry global information. To obtain consistent features of multi-view data from various perspectives, a Transformer Feature Fusion Contrastive (TFFC) module is introduced to combine and learn the extracted low-dimensional features in a contrastive manner. Finally, optimized clustering results are derived by clustering the resulting high-level features, which share consistency information. Extensive experiments indicate that the proposed approach achieves better clustering than state-of-the-art methods on six benchmark datasets.
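The abstract outlines a pipeline of per-view encoding with attention, followed by contrastive alignment across views. The sketch below is a minimal, hypothetical illustration of that idea in plain NumPy, not the paper's implementation: the encoder, attention weighting, and InfoNCE-style loss are toy stand-ins for the Att-AE and TFFC modules, and all function names and dimensions are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy linear encoder standing in for one view's Att-AE branch."""
    return np.tanh(x @ W)

def attention_weights(z):
    """Scaled dot-product self-attention scores over samples (illustrative only)."""
    scores = z @ z.T / np.sqrt(z.shape[1])
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss pulling the same sample's two views together."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                      # pairwise cosine similarities
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))         # positives lie on the diagonal

# two synthetic, correlated "views" of 8 samples (assumed toy data)
x1 = rng.normal(size=(8, 10))
x2 = x1 + 0.1 * rng.normal(size=(8, 10))
W1, W2 = rng.normal(size=(10, 4)), rng.normal(size=(10, 4))
z1, z2 = encode(x1, W1), encode(x2, W2)
a = attention_weights(z1)                      # attention-weighted view features
loss = contrastive_loss(a @ z1, z2)
print(float(loss))
```

In the actual method, minimizing a loss of this kind over all view pairs drives the fused representation toward view-consistent features, which are then clustered; the transformer-based fusion of TFFC replaces the simple attention weighting shown here.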