Xiao Huarun, Hong Zhiyong, Xiong Liping, Zeng Zhiqiang
College of Electronic and Information Engineering, Wuyi University, Jiangmen, Guangdong, China.
PeerJ Comput Sci. 2024 Mar 5;10:e1906. doi: 10.7717/peerj-cs.1906. eCollection 2024.
Advances in deep learning have propelled the evolution of multi-view clustering techniques, which strive to obtain a view-common representation from multi-view datasets. However, contemporary multi-view clustering methods confront two prominent challenges. First, view-specific representations carry no guarantee that the noise introduced during encoding is reduced; second, the fusion process degrades view-specific representations, so that informative features cannot be captured from the multi-view data. Both issues can harm the accuracy of the clustering results. In this article, we introduce a novel technique, the "contrastive attentive strategy," to address these problems. Our approach extracts robust, low-noise view-specific representations from multi-view data while preserving view completeness, and in turn yields consistent representations that retain view-specific features. We integrate view-specific encoders, a hybrid attentive module, a fusion module, and deep clustering into a unified framework called AMCFCN. Experimental results on four multi-view datasets demonstrate that AMCFCN outperforms seven competitive multi-view clustering methods. Our source code is available at https://github.com/xiaohuarun/AMCFCN.
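To make the pipeline the abstract describes concrete, the sketch below shows per-view encoders, a simple attentive weighting before fusion, a soft clustering head, and an InfoNCE-style contrastive loss between two views of the same sample. This is a minimal illustration under assumed choices (PyTorch, layer sizes, softmax attention, the loss form), not the authors' implementation; see the linked repository for that.

```python
# Illustrative sketch only: all layer sizes, the attention form, and the
# InfoNCE-style loss are assumptions, not the published AMCFCN code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMCFCNSketch(nn.Module):
    def __init__(self, view_dims, latent_dim=64, n_clusters=10):
        super().__init__()
        # One encoder per view extracts a view-specific representation.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, latent_dim))
            for d in view_dims
        )
        # A simple attentive module scores each view before fusion.
        self.attn = nn.Linear(latent_dim, 1)
        # Clustering head produces soft cluster assignments.
        self.cluster_head = nn.Linear(latent_dim, n_clusters)

    def forward(self, views):
        zs = [enc(x) for enc, x in zip(self.encoders, views)]  # view-specific
        z = torch.stack(zs, dim=1)                             # (B, V, D)
        w = torch.softmax(self.attn(z), dim=1)                 # per-view weights
        fused = (w * z).sum(dim=1)                             # view-common
        return zs, fused, torch.softmax(self.cluster_head(fused), dim=-1)

def contrastive_loss(z1, z2, tau=0.5):
    # InfoNCE-style loss: the same sample's two views are positives,
    # all other samples in the batch are negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

# Toy usage: two random "views" of 8 samples with different feature dims.
model = AMCFCNSketch(view_dims=[20, 30])
views = [torch.randn(8, 20), torch.randn(8, 30)]
zs, fused, q = model(views)
loss = contrastive_loss(zs[0], zs[1])
```

Contrasting view-specific representations before fusion is one plausible reading of how the strategy preserves view completeness while aligning views; the paper's hybrid attentive module is likely richer than the single-linear scoring used here.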