Peng Zhihao, Liu Hui, Jia Yuheng, Hou Junhui
IEEE Trans Image Process. 2022;31:3430-3439. doi: 10.1109/TIP.2022.3171421. Epub 2022 May 11.
Deep self-expressiveness-based subspace clustering methods have demonstrated their effectiveness. However, existing works consider only attribute information when performing self-expression, which limits clustering performance. In this paper, we propose a novel adaptive attribute and structure subspace clustering network (AASSC-Net) that simultaneously considers attribute and structure information in an adaptive graph-fusion manner. Specifically, we first exploit an auto-encoder to represent input data samples with latent features, from which an attribute matrix is constructed. We also construct a mixed signed and symmetric structure matrix to capture the local geometric structure underlying the data samples. We then perform self-expression on the constructed attribute and structure matrices to learn their affinity graphs separately. Finally, we design a novel attention-based fusion module that adaptively leverages these two affinity graphs to construct a more discriminative affinity graph. Extensive experimental results on commonly used benchmark datasets demonstrate that AASSC-Net significantly outperforms state-of-the-art methods. In addition, we conduct comprehensive ablation studies to assess the effectiveness of the designed modules. The code is publicly available at https://github.com/ZhihaoPENG-CityU/AASSC-Net.
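As a rough illustration of the two core ideas in the abstract, self-expression on a feature matrix and adaptive fusion of two affinity graphs, here is a minimal NumPy sketch. It is not the authors' learned implementation: the closed-form ridge-regularized self-expression solver and the energy-based attention weighting are simplifying assumptions standing in for the end-to-end trained layers in AASSC-Net.

```python
import numpy as np

def self_expressive_coeffs(X, lam=0.1):
    """Closed-form ridge-regularized self-expression:
        min_C ||X - C X||_F^2 + lam ||C||_F^2,
    so each sample (row of X) is reconstructed from the others.
    AASSC-Net instead learns C end-to-end; this solver is a simplification."""
    n = X.shape[0]
    G = X @ X.T                                  # Gram matrix of samples
    C = np.linalg.solve(G + lam * np.eye(n), G)  # (G + lam I)^{-1} G
    np.fill_diagonal(C, 0.0)                     # forbid trivial self-representation
    return C

def attention_fuse(C_attr, C_struct):
    """Fuse the attribute and structure affinity graphs with softmax weights
    derived from each graph's total magnitude (a hypothetical scoring rule;
    the paper's attention module is a learned network)."""
    s = np.array([np.abs(C_attr).sum(), np.abs(C_struct).sum()])
    w = np.exp(s - s.max())
    w /= w.sum()                                 # softmax over the two graphs
    C = w[0] * C_attr + w[1] * C_struct
    return 0.5 * (np.abs(C) + np.abs(C).T)       # symmetric affinity for clustering
```

The symmetrized output could then be fed to a spectral-clustering step (e.g. scikit-learn's `SpectralClustering` with a precomputed affinity) to obtain the final cluster labels.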