Liu Di, Baldi Simone, Yu Wenwu, Chen C L Philip
IEEE Trans Neural Netw Learn Syst. 2022 Apr;33(4):1650-1662. doi: 10.1109/TNNLS.2020.3043110. Epub 2022 Apr 4.
The broad learning system (BLS) paradigm has recently emerged as a computationally efficient approach to supervised learning. Its efficiency arises from a learning mechanism based on the method of least squares. However, the need to store and invert large matrices can put the efficiency of this mechanism at risk in big-data scenarios. In this work, we propose a new implementation of BLS in which the need to store and invert large matrices is avoided. The distinguishing features of the designed learning mechanism are as follows: 1) the training process can balance efficient usage of memory against the required number of iterations (hybrid recursive learning) and 2) retraining is avoided when the network is expanded (incremental learning). It is shown that, while the proposed framework is equivalent to the standard BLS in terms of the trained network weights, it can smoothly train much larger networks than the standard BLS, projecting BLS toward the big-data frontier.
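To make the least-squares mechanism and the memory/iteration trade-off concrete, the sketch below contrasts a standard batch solve of the output weights with a block recursive least-squares update that visits the data in chunks. This is a minimal illustration, not the authors' algorithm: the names (batch_least_squares, recursive_least_squares), the block size of 200, and the ridge parameter lam are assumptions made for the example. Smaller blocks lower the size of the matrix inverted at each step at the cost of more iterations, which is the kind of balance the hybrid recursive learning above refers to.

```python
import numpy as np

def batch_least_squares(A, Y, lam=1e-3):
    # Standard batch solve: W = (A^T A + lam*I)^{-1} A^T Y.
    # Requires forming an (n x n) Gram matrix for n = A.shape[1].
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ Y)

def recursive_least_squares(blocks, n, n_out, lam=1e-3):
    # Block recursive least squares: visit (A_k, Y_k) chunks one at a
    # time, carrying only the running inverse P (n x n) and weights W.
    P = np.eye(n) / lam              # P_0 = (lam*I)^{-1}
    W = np.zeros((n, n_out))
    for A_k, Y_k in blocks:
        # Woodbury update: only a (block x block) matrix is inverted.
        G = P @ A_k.T @ np.linalg.inv(np.eye(A_k.shape[0]) + A_k @ P @ A_k.T)
        W = W + G @ (Y_k - A_k @ W)  # correct weights with the new block
        P = P - G @ A_k @ P          # update the running inverse
    return W

rng = np.random.default_rng(0)
A = rng.standard_normal((1200, 50))  # stacked feature/enhancement outputs
Y = rng.standard_normal((1200, 3))   # training targets
W_batch = batch_least_squares(A, Y)
W_rls = recursive_least_squares(
    [(A[i:i + 200], Y[i:i + 200]) for i in range(0, 1200, 200)],
    n=50, n_out=3)
print(np.allclose(W_batch, W_rls, atol=1e-6))  # expect True
```

Because the recursion is an exact Woodbury update of the regularized normal equations, the final weights coincide with the batch solution up to round-off, mirroring the equivalence claim above.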
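The second feature, expanding the network without retraining, can be illustrated with a Schur-complement update of the regularized normal equations when the outputs of new nodes are appended as extra columns of the design matrix. Again a hedged sketch under stated assumptions: add_nodes, the matrices H and K, and all dimensions are illustrative, not necessarily the exact incremental update used in the paper.

```python
import numpy as np

def add_nodes(A, H, Y, W, K, lam=1e-3):
    # Append new node outputs H (extra columns of the design matrix)
    # and update the weights without retraining from scratch.
    # K = (A^T A + lam*I)^{-1} is carried over from the previous solve.
    B = A.T @ H
    S = H.T @ H + lam * np.eye(H.shape[1]) - B.T @ K @ B  # Schur complement
    S_inv = np.linalg.inv(S)
    W_new = S_inv @ (H.T @ Y - B.T @ W)   # weights of the added nodes
    W_old = W - K @ B @ W_new             # correction to the existing weights
    KB = K @ B
    K_next = np.block([[K + KB @ S_inv @ KB.T, -KB @ S_inv],
                       [-S_inv @ KB.T, S_inv]])  # inverse for the next expansion
    return np.vstack([W_old, W_new]), K_next

rng = np.random.default_rng(1)
A = rng.standard_normal((800, 40))   # current node outputs
H = rng.standard_normal((800, 10))   # outputs of the newly added nodes
Y = rng.standard_normal((800, 2))
lam = 1e-3
K = np.linalg.inv(A.T @ A + lam * np.eye(40))
W = K @ A.T @ Y                      # weights before the expansion
W_inc, _ = add_nodes(A, H, Y, W, K, lam)
A_full = np.hstack([A, H])           # full retrain, for comparison only
W_full = np.linalg.solve(A_full.T @ A_full + lam * np.eye(50), A_full.T @ Y)
print(np.allclose(W_inc, W_full, atol=1e-6))  # expect True
```

Since the Schur-complement identity is exact, the incrementally updated weights match a full retrain up to round-off: only the small matrix S, whose size equals the number of added nodes, is inverted when the network grows.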