He Tao, Gao Lianli, Song Jingkuan, Li Yuan-Fang
IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):4791-4802. doi: 10.1109/TNNLS.2021.3129280. Epub 2023 Aug 4.
Learning accurate low-dimensional embeddings for a network is a crucial task as it facilitates many downstream network analytics tasks. For large networks, the trained embeddings often require a significant amount of space to store, making storage and processing a challenge. Building on our previous work on semisupervised network embedding, we develop d-SNEQ, a differentiable DNN-based quantization method for network embedding. d-SNEQ incorporates a rank loss to equip the learned quantization codes with rich high-order information and is able to substantially compress the size of trained embeddings, thus reducing storage footprint and accelerating retrieval speed. We also propose a new evaluation metric, path prediction, to fairly and more directly evaluate the model performance on the preservation of high-order information. Our evaluation on four real-world networks of diverse characteristics shows that d-SNEQ outperforms a number of state-of-the-art embedding methods in link prediction, path prediction, node classification, and node recommendation while being far more space- and time-efficient.
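The abstract describes compressing trained embeddings into compact quantization codes to cut storage and speed up retrieval. As a rough illustration of the general idea (not d-SNEQ itself, whose codes are learned end-to-end by a differentiable DNN with a rank loss), the sketch below applies classic product quantization to a matrix of node embeddings: each embedding is split into subvectors, each subvector is replaced by the index of its nearest learned centroid, and the full matrix is stored as one byte per subvector. All names and parameters here are illustrative assumptions.

```python
# Illustrative product-quantization sketch for compressing node embeddings.
# NOT the authors' d-SNEQ method; d-SNEQ learns its codes with a
# differentiable DNN and a rank loss rather than post-hoc k-means.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(x, k, iters=20):
    """Plain Lloyd's algorithm; sufficient for a demonstration."""
    centroids = x[rng.choice(len(x), k, replace=False)].copy()
    for _ in range(iters):
        # Squared distances from every point to every centroid.
        d = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            pts = x[assign == j]
            if len(pts):
                centroids[j] = pts.mean(0)
    return centroids

def pq_train(emb, m, k):
    """Split embeddings into m subvectors; learn k centroids per split."""
    subdim = emb.shape[1] // m
    return [kmeans(emb[:, i * subdim:(i + 1) * subdim], k) for i in range(m)]

def pq_encode(emb, codebooks):
    """Replace each subvector with the index of its nearest centroid."""
    subdim = emb.shape[1] // len(codebooks)
    codes = []
    for i, cb in enumerate(codebooks):
        sub = emb[:, i * subdim:(i + 1) * subdim]
        d = ((sub[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        codes.append(d.argmin(1))
    return np.stack(codes, axis=1).astype(np.uint8)  # one byte per subvector

def pq_decode(codes, codebooks):
    """Reconstruct approximate embeddings from the stored codes."""
    return np.concatenate(
        [codebooks[i][codes[:, i]] for i in range(codes.shape[1])], axis=1)

# Toy embedding matrix: 1000 nodes, 128-dim float32 (sizes are assumptions).
emb = rng.standard_normal((1000, 128)).astype(np.float32)
codebooks = pq_train(emb, m=16, k=256)   # 16 subvectors, 256 centroids each
codes = pq_encode(emb, codebooks)        # 16 bytes per node vs 512 bytes raw
recon = pq_decode(codes, codebooks)
ratio = emb.nbytes / codes.nbytes        # 32x smaller (codebook cost aside)
```

With these toy sizes each node shrinks from 512 bytes (128 float32 values) to 16 one-byte codes, a 32x reduction excluding the small shared codebooks; retrieval can likewise be accelerated by precomputing query-to-centroid distance tables instead of comparing full vectors.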