School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan, China; Trusted Cloud Computing and Big Data Key Laboratory of Sichuan Province, China.
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Sichuan, China.
Neural Netw. 2020 Nov;131:93-102. doi: 10.1016/j.neunet.2020.07.014. Epub 2020 Jul 31.
Deep auto-encoders (DAEs) have achieved great success in learning data representations owing to the powerful representational capacity of neural networks. However, most DAEs focus only on the dominant structures needed to reconstruct the data from a latent space, neglecting rich latent structural information. In this work, we propose a new representation learning method that explicitly models and leverages sample relations, which in turn serve as supervision to guide the representation learning. Unlike previous work, our framework preserves the relations between samples well. Since predicting pairwise relations is itself a fundamental problem, our model learns them adaptively from data, which provides considerable flexibility for encoding the real data manifold. The important roles of relation and representation learning are evaluated on the clustering task. Extensive experiments on benchmark data sets demonstrate the superiority of our approach. By embedding samples into a subspace, we further show that our method can address the large-scale and out-of-sample problems. Our source code is publicly available at: https://github.com/nbShawnLu/RGRL.
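The core idea, using pairwise sample relations as an extra supervision signal alongside reconstruction, can be sketched as follows. This is an illustrative simplification, not the paper's actual architecture: the linear encoder/decoder, the cosine-similarity affinity, the weight `lam`, and the assumption that a target relation matrix `S` is given (rather than learned jointly, as in the paper) are all choices made here for brevity.

```python
import numpy as np

def pairwise_affinity(Z):
    """Cosine-similarity affinity between latent codes (rows of Z)."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return Zn @ Zn.T

def relation_guided_loss(X, W_enc, W_dec, S, lam=0.1):
    """Reconstruction loss plus a relation-preservation penalty.

    X: (n, d) data; W_enc: (d, k) encoder; W_dec: (k, d) decoder;
    S: (n, n) target pairwise relations (assumed given here; the paper
    learns them adaptively from data); lam: illustrative trade-off weight.
    """
    Z = X @ W_enc                    # latent codes
    X_hat = Z @ W_dec                # reconstruction
    rec = np.mean((X - X_hat) ** 2)  # plain auto-encoder objective
    # Penalize latent affinities that deviate from the target relations,
    # so the embedding preserves sample-to-sample structure.
    rel = np.mean((pairwise_affinity(Z) - S) ** 2)
    return rec + lam * rel

# Minimal usage: supervise the latent space with the input-space affinities.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))
W_enc = rng.normal(size=(5, 3))
W_dec = rng.normal(size=(3, 5))
S = pairwise_affinity(X)
loss = relation_guided_loss(X, W_enc, W_dec, S)
```

In a full model, both the network weights and the relation matrix would be optimized, letting the learned relations and representations reinforce each other.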