Zhang Wandong, Yang Yimin, Wu Q M Jonathan, Wang Tianlei, Zhang Hui
IEEE Trans Neural Netw Learn Syst. 2024 May;35(5):6570-6582. doi: 10.1109/TNNLS.2022.3211149. Epub 2024 May 2.
Most multilayer Moore-Penrose inverse (MPI)-based neural networks, such as the deep random vector functional link (RVFL), are structured with two separate stages: unsupervised feature encoding and supervised pattern classification. Once unsupervised learning finishes, the latent encoding is frozen and receives no supervised fine-tuning. However, in complex tasks such as ImageNet classification, many more clues could be encoded directly, while unsupervised learning, by definition, cannot know which of them are useful for a given task. The latent space representations therefore need to be retrained in the supervised pattern classification stage to capture clues that unsupervised learning has not yet learned. In this article, a recomputation-based multilayer network using the Moore-Penrose inverse (RML-MP) is developed. In particular, the residual error at the output layer is pulled back to each hidden layer, and the parameters of the hidden layers are recomputed with the MPI for more robust representations. A sparse RML-MP (SRML-MP) model is then proposed to boost the performance of RML-MP. Experimental results with varying numbers of training samples (from 3k to 1.8 million) show that the proposed models achieve higher Top-1 testing accuracy than most representation learning algorithms. For reproducibility, the source code is available at https://github.com/W1AE/Retraining.
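As a rough illustration of the retraining idea described above, the following Python snippet gives one hedged interpretation: output weights are fitted with a ridge-regularized Moore-Penrose inverse, the output residual is pulled back through the pseudoinverse of those weights to form an updated target for the hidden layer, and the hidden-layer weights are recomputed by MPI. The function name, the ridge parameter, and the exact pull-back rule are illustrative assumptions, not the paper's RML-MP algorithm, and a linear hidden layer stands in for the paper's activated one.

```python
import numpy as np

def mp_retrain_step(X, H, T, ridge=1e-2):
    """Hedged sketch of one MPI-based retraining step (not the exact
    RML-MP update): fit output weights, pull the output residual back,
    and recompute the hidden-layer weights with the MPI."""
    # Output weights via a ridge-regularized Moore-Penrose inverse:
    # beta = (H^T H + ridge * I)^{-1} H^T T
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ T)
    # Residual error at the output layer.
    E = T - H @ beta
    # Pull the residual back through beta's pseudoinverse to obtain an
    # updated target for the hidden activations (illustrative assumption).
    H_target = H + E @ np.linalg.pinv(beta)
    # Recompute the hidden-layer weights with the MPI so X @ W ~ H_target.
    # (With a nonlinear hidden layer, the activation's inverse would be
    # applied to H_target first; a linear layer is used here for brevity.)
    W = np.linalg.pinv(X) @ H_target
    return W, beta

# Toy usage on random data (shapes only; no claim about real performance).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))              # inputs
    H = np.tanh(X @ rng.standard_normal((20, 50)))  # initial hidden features
    T = rng.standard_normal((100, 10))              # target matrix
    W, beta = mp_retrain_step(X, H, T)
    print(W.shape, beta.shape)  # (20, 50) (50, 10)
```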